00:00:00.002 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v23.11" build number 106 00:00:00.002 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3284 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.106 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.126 The recommended git tool is: git 00:00:00.127 using credential 00000000-0000-0000-0000-000000000002 00:00:00.129 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.158 Fetching changes from the remote Git repository 00:00:00.162 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.191 Using shallow fetch with depth 1 00:00:00.192 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.192 > git --version # timeout=10 00:00:00.213 > git --version # 'git version 2.39.2' 00:00:00.213 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.230 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.230 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.657 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.669 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.681 Checking out Revision 1c6ed56008363df82da0fcec030d6d5a1f7bd340 (FETCH_HEAD) 00:00:05.681 > git config core.sparsecheckout # timeout=10 00:00:05.691 > git read-tree -mu HEAD # timeout=10 00:00:05.708 > git checkout -f 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=5 00:00:05.730 Commit message: "spdk-abi-per-patch: pass revision to subbuild" 00:00:05.730 > git rev-list --no-walk 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=10 00:00:05.853 [Pipeline] Start of Pipeline 00:00:05.869 [Pipeline] library 00:00:05.870 Loading library shm_lib@master 00:00:05.870 Library shm_lib@master is cached. Copying from home. 00:00:05.883 [Pipeline] node 00:00:05.890 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.892 [Pipeline] { 00:00:05.901 [Pipeline] catchError 00:00:05.903 [Pipeline] { 00:00:05.915 [Pipeline] wrap 00:00:05.922 [Pipeline] { 00:00:05.927 [Pipeline] stage 00:00:05.929 [Pipeline] { (Prologue) 00:00:06.110 [Pipeline] sh 00:00:06.386 + logger -p user.info -t JENKINS-CI 00:00:06.407 [Pipeline] echo 00:00:06.409 Node: GP11 00:00:06.417 [Pipeline] sh 00:00:06.708 [Pipeline] setCustomBuildProperty 00:00:06.718 [Pipeline] echo 00:00:06.720 Cleanup processes 00:00:06.724 [Pipeline] sh 00:00:06.998 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.998 1147248 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.009 [Pipeline] sh 00:00:07.287 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.287 ++ grep -v 'sudo pgrep' 00:00:07.287 ++ awk '{print $1}' 00:00:07.287 + sudo kill -9 00:00:07.287 + true 00:00:07.298 [Pipeline] cleanWs 00:00:07.306 [WS-CLEANUP] Deleting project workspace... 00:00:07.306 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.311 [WS-CLEANUP] done 00:00:07.314 [Pipeline] setCustomBuildProperty 00:00:07.325 [Pipeline] sh 00:00:07.598 + sudo git config --global --replace-all safe.directory '*' 00:00:07.683 [Pipeline] httpRequest 00:00:07.728 [Pipeline] echo 00:00:07.729 Sorcerer 10.211.164.101 is alive 00:00:07.738 [Pipeline] httpRequest 00:00:07.741 HttpMethod: GET 00:00:07.743 URL: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:07.743 Sending request to url: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:07.744 Response Code: HTTP/1.1 200 OK 00:00:07.744 Success: Status code 200 is in the accepted range: 200,404 00:00:07.744 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:08.819 [Pipeline] sh 00:00:09.098 + tar --no-same-owner -xf jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:09.113 [Pipeline] httpRequest 00:00:09.142 [Pipeline] echo 00:00:09.144 Sorcerer 10.211.164.101 is alive 00:00:09.152 [Pipeline] httpRequest 00:00:09.155 HttpMethod: GET 00:00:09.155 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:09.156 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:09.169 Response Code: HTTP/1.1 200 OK 00:00:09.169 Success: Status code 200 is in the accepted range: 200,404 00:00:09.169 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:38.532 [Pipeline] sh 00:00:38.845 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:41.388 [Pipeline] sh 00:00:41.665 + git -C spdk log --oneline -n5 00:00:41.665 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:00:41.665 330a4f94d nvme: check pthread_mutex_destroy() return value 00:00:41.665 7b72c3ced nvme: add nvme_ctrlr_lock 00:00:41.665 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock 00:00:41.665 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout 00:00:41.683 [Pipeline] withCredentials 00:00:41.692 > git --version # timeout=10 00:00:41.706 > git --version # 'git version 2.39.2' 00:00:41.719 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:41.721 [Pipeline] { 00:00:41.730 [Pipeline] retry 00:00:41.732 [Pipeline] { 00:00:41.756 [Pipeline] sh 00:00:42.031 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:47.311 [Pipeline] } 00:00:47.335 [Pipeline] // retry 00:00:47.341 [Pipeline] } 00:00:47.362 [Pipeline] // withCredentials 00:00:47.373 [Pipeline] httpRequest 00:00:47.390 [Pipeline] echo 00:00:47.392 Sorcerer 10.211.164.101 is alive 00:00:47.401 [Pipeline] httpRequest 00:00:47.406 HttpMethod: GET 00:00:47.407 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:47.407 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:47.409 Response Code: HTTP/1.1 200 OK 00:00:47.409 Success: Status code 200 is in the accepted range: 200,404 00:00:47.410 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:52.468 [Pipeline] sh 00:00:52.755 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:54.673 [Pipeline] sh 00:00:54.953 + git -C dpdk log --oneline -n5 00:00:54.953 eeb0605f11 version: 23.11.0 00:00:54.953 238778122a doc: 
update release notes for 23.11 00:00:54.953 46aa6b3cfc doc: fix description of RSS features 00:00:54.953 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:00:54.953 7e421ae345 devtools: support skipping forbid rule check 00:00:54.963 [Pipeline] } 00:00:54.979 [Pipeline] // stage 00:00:54.987 [Pipeline] stage 00:00:54.989 [Pipeline] { (Prepare) 00:00:55.008 [Pipeline] writeFile 00:00:55.021 [Pipeline] sh 00:00:55.336 + logger -p user.info -t JENKINS-CI 00:00:55.345 [Pipeline] sh 00:00:55.617 + logger -p user.info -t JENKINS-CI 00:00:55.628 [Pipeline] sh 00:00:55.904 + cat autorun-spdk.conf 00:00:55.904 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:55.904 SPDK_TEST_NVMF=1 00:00:55.904 SPDK_TEST_NVME_CLI=1 00:00:55.904 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:55.904 SPDK_TEST_NVMF_NICS=e810 00:00:55.904 SPDK_TEST_VFIOUSER=1 00:00:55.904 SPDK_RUN_UBSAN=1 00:00:55.904 NET_TYPE=phy 00:00:55.904 SPDK_TEST_NATIVE_DPDK=v23.11 00:00:55.904 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:55.910 RUN_NIGHTLY=1 00:00:55.915 [Pipeline] readFile 00:00:55.942 [Pipeline] withEnv 00:00:55.944 [Pipeline] { 00:00:55.959 [Pipeline] sh 00:00:56.238 + set -ex 00:00:56.238 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:56.238 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:56.238 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:56.238 ++ SPDK_TEST_NVMF=1 00:00:56.238 ++ SPDK_TEST_NVME_CLI=1 00:00:56.238 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:56.238 ++ SPDK_TEST_NVMF_NICS=e810 00:00:56.238 ++ SPDK_TEST_VFIOUSER=1 00:00:56.238 ++ SPDK_RUN_UBSAN=1 00:00:56.238 ++ NET_TYPE=phy 00:00:56.238 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:00:56.238 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:56.238 ++ RUN_NIGHTLY=1 00:00:56.238 + case $SPDK_TEST_NVMF_NICS in 00:00:56.238 + DRIVERS=ice 00:00:56.238 + [[ tcp == \r\d\m\a ]] 00:00:56.238 + [[ -n ice ]] 00:00:56.238 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:56.238 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:56.238 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:56.238 rmmod: ERROR: Module irdma is not currently loaded 00:00:56.238 rmmod: ERROR: Module i40iw is not currently loaded 00:00:56.238 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:56.238 + true 00:00:56.238 + for D in $DRIVERS 00:00:56.238 + sudo modprobe ice 00:00:56.238 + exit 0 00:00:56.251 [Pipeline] } 00:00:56.269 [Pipeline] // withEnv 00:00:56.274 [Pipeline] } 00:00:56.290 [Pipeline] // stage 00:00:56.299 [Pipeline] catchError 00:00:56.301 [Pipeline] { 00:00:56.316 [Pipeline] timeout 00:00:56.316 Timeout set to expire in 50 min 00:00:56.318 [Pipeline] { 00:00:56.333 [Pipeline] stage 00:00:56.335 [Pipeline] { (Tests) 00:00:56.351 [Pipeline] sh 00:00:56.630 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:56.630 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:56.630 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:56.630 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:56.630 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:56.630 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:56.630 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:56.630 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:56.630 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:56.630 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:56.630 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:56.630 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:56.630 + source /etc/os-release 00:00:56.630 ++ NAME='Fedora Linux' 00:00:56.630 ++ VERSION='38 (Cloud Edition)' 00:00:56.630 ++ ID=fedora 00:00:56.630 ++ VERSION_ID=38 00:00:56.630 ++ VERSION_CODENAME= 00:00:56.630 ++ PLATFORM_ID=platform:f38 00:00:56.630 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:56.630 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:56.630 ++ LOGO=fedora-logo-icon 00:00:56.630 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:56.630 ++ HOME_URL=https://fedoraproject.org/ 00:00:56.630 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:56.630 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:56.630 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:56.630 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:56.630 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:56.630 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:56.630 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:56.630 ++ SUPPORT_END=2024-05-14 00:00:56.630 ++ VARIANT='Cloud Edition' 00:00:56.630 ++ VARIANT_ID=cloud 00:00:56.630 + uname -a 00:00:56.630 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:56.630 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:57.562 Hugepages 00:00:57.562 node hugesize free / total 00:00:57.562 node0 1048576kB 0 / 0 00:00:57.562 node0 2048kB 0 / 0 00:00:57.562 node1 1048576kB 0 / 0 00:00:57.562 node1 2048kB 0 / 0 00:00:57.562 00:00:57.562 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:57.562 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:57.562 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:00:57.563 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:00:57.563 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:00:57.563 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:00:57.563 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:57.563 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:57.563 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:57.563 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:57.821 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:57.821 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:57.821 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:57.821 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:57.821 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:57.821 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:57.821 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:57.821 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:57.821 + rm -f /tmp/spdk-ld-path 00:00:57.821 + source autorun-spdk.conf 00:00:57.821 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:57.821 ++ SPDK_TEST_NVMF=1 00:00:57.821 ++ SPDK_TEST_NVME_CLI=1 00:00:57.821 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:57.821 ++ SPDK_TEST_NVMF_NICS=e810 00:00:57.821 ++ SPDK_TEST_VFIOUSER=1 00:00:57.821 ++ SPDK_RUN_UBSAN=1 00:00:57.821 ++ NET_TYPE=phy 00:00:57.821 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:00:57.821 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:57.821 ++ RUN_NIGHTLY=1 00:00:57.821 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:57.821 + [[ -n '' ]] 00:00:57.821 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:57.821 + for M in /var/spdk/build-*-manifest.txt 00:00:57.821 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:57.821 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:57.821 + for M in /var/spdk/build-*-manifest.txt 00:00:57.821 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:57.821 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:57.821 ++ uname 00:00:57.821 + [[ Linux == \L\i\n\u\x ]] 00:00:57.821 + sudo dmesg -T 00:00:57.821 + sudo dmesg --clear 00:00:57.821 + dmesg_pid=1147968 00:00:57.821 + [[ Fedora Linux == FreeBSD ]] 00:00:57.821 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:57.821 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:57.821 + sudo dmesg -Tw 00:00:57.821 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:57.821 + [[ -x /usr/src/fio-static/fio ]] 00:00:57.821 + export FIO_BIN=/usr/src/fio-static/fio 00:00:57.821 + FIO_BIN=/usr/src/fio-static/fio 00:00:57.821 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:57.821 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:57.821 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:57.821 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:57.821 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:57.821 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:57.821 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:57.821 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:57.821 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:57.821 Test configuration: 00:00:57.821 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:57.821 SPDK_TEST_NVMF=1 00:00:57.821 SPDK_TEST_NVME_CLI=1 00:00:57.821 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:57.821 SPDK_TEST_NVMF_NICS=e810 00:00:57.821 SPDK_TEST_VFIOUSER=1 00:00:57.821 SPDK_RUN_UBSAN=1 00:00:57.821 NET_TYPE=phy 00:00:57.821 SPDK_TEST_NATIVE_DPDK=v23.11 00:00:57.821 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:57.821 RUN_NIGHTLY=1 18:31:08 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:57.821 18:31:08 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:57.821 18:31:08 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:57.821 18:31:08 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:57.821 18:31:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:57.821 18:31:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:57.821 18:31:08 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:57.821 18:31:08 -- paths/export.sh@5 -- $ export PATH 00:00:57.821 18:31:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:57.821 18:31:08 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:57.821 18:31:08 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:57.821 18:31:08 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721493068.XXXXXX 00:00:57.821 18:31:08 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721493068.yapeN4 00:00:57.821 18:31:08 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:57.821 18:31:08 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:00:57.821 18:31:08 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:57.821 18:31:08 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:00:57.821 18:31:08 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:57.822 18:31:08 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:57.822 18:31:08 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:57.822 18:31:08 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:00:57.822 18:31:08 -- common/autotest_common.sh@10 -- $ set +x 00:00:57.822 18:31:08 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:00:57.822 18:31:08 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:57.822 18:31:08 -- pm/common@17 -- $ local monitor 00:00:57.822 18:31:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:57.822 18:31:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:57.822 18:31:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:57.822 18:31:08 -- pm/common@21 -- $ date +%s 00:00:57.822 18:31:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:57.822 18:31:08 -- pm/common@21 -- $ date +%s 00:00:57.822 18:31:08 -- pm/common@25 -- $ sleep 1 00:00:57.822 18:31:08 -- pm/common@21 -- $ date +%s 00:00:57.822 18:31:08 -- pm/common@21 -- $ date +%s 00:00:57.822 18:31:08 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721493068 00:00:57.822 18:31:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721493068 00:00:57.822 18:31:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721493068 00:00:57.822 18:31:08 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721493068 00:00:57.822 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721493068_collect-vmstat.pm.log 00:00:58.080 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721493068_collect-cpu-load.pm.log 00:00:58.080 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721493068_collect-cpu-temp.pm.log 00:00:58.080 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721493068_collect-bmc-pm.bmc.pm.log 00:00:59.014 18:31:09 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:00:59.014 18:31:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:59.014 18:31:09 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:59.014 18:31:09 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:59.014 18:31:09 -- spdk/autobuild.sh@16 -- $ date -u 00:00:59.014 Sat Jul 20 04:31:09 PM UTC 2024 00:00:59.014 18:31:09 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:59.014 v24.05-13-g5fa2f5086 00:00:59.014 18:31:09 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:59.014 18:31:09 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:59.014 18:31:09 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:59.014 18:31:09 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:00:59.014 18:31:09 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:00:59.014 18:31:09 -- common/autotest_common.sh@10 -- $ set +x 00:00:59.014 ************************************ 00:00:59.014 START TEST ubsan 00:00:59.014 ************************************ 00:00:59.014 18:31:09 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:00:59.014 using ubsan 00:00:59.014 00:00:59.014 real 0m0.000s 00:00:59.014 user 0m0.000s 00:00:59.014 sys 0m0.000s 00:00:59.014 18:31:09 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:00:59.014 18:31:09 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:59.014 ************************************ 00:00:59.014 END TEST ubsan 00:00:59.014 ************************************ 00:00:59.014 18:31:09 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:00:59.014 18:31:09 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:00:59.014 18:31:09 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:00:59.014 18:31:09 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:00:59.014 18:31:09 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:00:59.014 18:31:09 -- common/autotest_common.sh@10 -- $ set +x 
00:00:59.014 ************************************ 00:00:59.014 START TEST build_native_dpdk 00:00:59.014 ************************************ 00:00:59.014 18:31:09 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:00:59.014 eeb0605f11 version: 23.11.0 00:00:59.014 238778122a doc: update release notes for 23.11 00:00:59.014 46aa6b3cfc doc: fix description of RSS features 00:00:59.014 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:00:59.014 7e421ae345 devtools: support skipping forbid rule check 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:00:59.014 18:31:09 
build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:00:59.014 18:31:09 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:00:59.014 patching file config/rte_config.h 00:00:59.014 Hunk #1 succeeded at 60 (offset 1 line). 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:00:59.014 18:31:09 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:03.199 The Meson build system 00:01:03.199 Version: 1.3.1 00:01:03.199 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:03.199 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:03.199 Build type: native build 00:01:03.199 Program cat found: YES (/usr/bin/cat) 00:01:03.199 Project name: DPDK 00:01:03.199 Project version: 23.11.0 00:01:03.199 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:03.199 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:03.199 Host machine cpu family: x86_64 00:01:03.199 Host machine cpu: x86_64 00:01:03.199 Message: ## Building in Developer Mode ## 00:01:03.199 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:03.199 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:03.199 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:03.199 Program python3 found: YES (/usr/bin/python3) 00:01:03.199 Program cat found: YES (/usr/bin/cat) 00:01:03.199 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:03.199 Compiler for C supports arguments -march=native: YES 00:01:03.199 Checking for size of "void *" : 8 00:01:03.199 Checking for size of "void *" : 8 (cached) 00:01:03.199 Library m found: YES 00:01:03.199 Library numa found: YES 00:01:03.199 Has header "numaif.h" : YES 00:01:03.199 Library fdt found: NO 00:01:03.199 Library execinfo found: NO 00:01:03.199 Has header "execinfo.h" : YES 00:01:03.199 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:03.199 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:03.199 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:03.199 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:03.199 Run-time dependency openssl found: YES 3.0.9 00:01:03.199 Run-time dependency libpcap found: YES 1.10.4 00:01:03.199 Has header "pcap.h" with dependency libpcap: YES 00:01:03.199 Compiler for C supports arguments -Wcast-qual: YES 00:01:03.199 Compiler for C supports arguments -Wdeprecated: YES 00:01:03.199 Compiler for C supports arguments -Wformat: YES 00:01:03.199 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:03.199 Compiler for C supports arguments -Wformat-security: NO 00:01:03.199 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:03.199 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:03.199 Compiler for C supports arguments -Wnested-externs: YES 00:01:03.199 Compiler for C supports arguments -Wold-style-definition: YES 00:01:03.199 Compiler for C supports arguments -Wpointer-arith: YES 00:01:03.199 Compiler for C supports arguments -Wsign-compare: YES 00:01:03.199 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:03.199 Compiler for C supports arguments -Wundef: YES 00:01:03.199 Compiler for C supports arguments -Wwrite-strings: YES 00:01:03.199 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:03.199 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:03.199 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:03.199 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:03.199 Program objdump found: YES (/usr/bin/objdump) 00:01:03.199 Compiler for C supports arguments -mavx512f: YES 00:01:03.199 Checking if "AVX512 checking" compiles: YES 00:01:03.199 Fetching value of define "__SSE4_2__" : 1 00:01:03.199 Fetching value of define "__AES__" : 1 00:01:03.199 Fetching value of define "__AVX__" : 1 00:01:03.199 Fetching value of define "__AVX2__" : (undefined) 00:01:03.199 Fetching value of define "__AVX512BW__" : (undefined) 00:01:03.199 Fetching value of define "__AVX512CD__" : (undefined) 00:01:03.199 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:03.199 Fetching value of define "__AVX512F__" : (undefined) 00:01:03.199 Fetching value of define "__AVX512VL__" : (undefined) 00:01:03.199 Fetching value of define "__PCLMUL__" : 1 00:01:03.199 Fetching value of define "__RDRND__" : 1 00:01:03.199 Fetching value of define "__RDSEED__" : (undefined) 00:01:03.199 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:03.199 Fetching value of define "__znver1__" : (undefined) 00:01:03.199 Fetching value of define "__znver2__" : (undefined) 00:01:03.199 Fetching value of define "__znver3__" : (undefined) 00:01:03.199 Fetching value of define "__znver4__" : (undefined) 00:01:03.199 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:03.199 Message: lib/log: Defining dependency "log" 00:01:03.199 Message: lib/kvargs: Defining dependency 
"kvargs" 00:01:03.199 Message: lib/telemetry: Defining dependency "telemetry" 00:01:03.199 Checking for function "getentropy" : NO 00:01:03.199 Message: lib/eal: Defining dependency "eal" 00:01:03.199 Message: lib/ring: Defining dependency "ring" 00:01:03.199 Message: lib/rcu: Defining dependency "rcu" 00:01:03.199 Message: lib/mempool: Defining dependency "mempool" 00:01:03.199 Message: lib/mbuf: Defining dependency "mbuf" 00:01:03.199 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:03.199 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:03.199 Compiler for C supports arguments -mpclmul: YES 00:01:03.199 Compiler for C supports arguments -maes: YES 00:01:03.199 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:03.199 Compiler for C supports arguments -mavx512bw: YES 00:01:03.199 Compiler for C supports arguments -mavx512dq: YES 00:01:03.199 Compiler for C supports arguments -mavx512vl: YES 00:01:03.199 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:03.199 Compiler for C supports arguments -mavx2: YES 00:01:03.199 Compiler for C supports arguments -mavx: YES 00:01:03.199 Message: lib/net: Defining dependency "net" 00:01:03.199 Message: lib/meter: Defining dependency "meter" 00:01:03.199 Message: lib/ethdev: Defining dependency "ethdev" 00:01:03.199 Message: lib/pci: Defining dependency "pci" 00:01:03.199 Message: lib/cmdline: Defining dependency "cmdline" 00:01:03.199 Message: lib/metrics: Defining dependency "metrics" 00:01:03.199 Message: lib/hash: Defining dependency "hash" 00:01:03.199 Message: lib/timer: Defining dependency "timer" 00:01:03.199 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:03.199 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:03.199 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:03.199 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:03.199 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:03.199 Message: lib/acl: Defining dependency "acl" 00:01:03.199 Message: lib/bbdev: Defining dependency "bbdev" 00:01:03.199 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:03.199 Run-time dependency libelf found: YES 0.190 00:01:03.199 Message: lib/bpf: Defining dependency "bpf" 00:01:03.199 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:03.199 Message: lib/compressdev: Defining dependency "compressdev" 00:01:03.199 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:03.199 Message: lib/distributor: Defining dependency "distributor" 00:01:03.199 Message: lib/dmadev: Defining dependency "dmadev" 00:01:03.199 Message: lib/efd: Defining dependency "efd" 00:01:03.199 Message: lib/eventdev: Defining dependency "eventdev" 00:01:03.199 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:03.199 Message: lib/gpudev: Defining dependency "gpudev" 00:01:03.199 Message: lib/gro: Defining dependency "gro" 00:01:03.199 Message: lib/gso: Defining dependency "gso" 00:01:03.199 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:03.199 Message: lib/jobstats: Defining dependency "jobstats" 00:01:03.199 Message: lib/latencystats: Defining dependency "latencystats" 00:01:03.199 Message: lib/lpm: Defining dependency "lpm" 00:01:03.199 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:03.199 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:03.199 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:03.199 Compiler for C 
supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:03.199 Message: lib/member: Defining dependency "member" 00:01:03.199 Message: lib/pcapng: Defining dependency "pcapng" 00:01:03.199 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:03.199 Message: lib/power: Defining dependency "power" 00:01:03.199 Message: lib/rawdev: Defining dependency "rawdev" 00:01:03.199 Message: lib/regexdev: Defining dependency "regexdev" 00:01:03.199 Message: lib/mldev: Defining dependency "mldev" 00:01:03.199 Message: lib/rib: Defining dependency "rib" 00:01:03.199 Message: lib/reorder: Defining dependency "reorder" 00:01:03.199 Message: lib/sched: Defining dependency "sched" 00:01:03.199 Message: lib/security: Defining dependency "security" 00:01:03.199 Message: lib/stack: Defining dependency "stack" 00:01:03.199 Has header "linux/userfaultfd.h" : YES 00:01:03.199 Has header "linux/vduse.h" : YES 00:01:03.199 Message: lib/vhost: Defining dependency "vhost" 00:01:03.199 Message: lib/ipsec: Defining dependency "ipsec" 00:01:03.199 Message: lib/pdcp: Defining dependency "pdcp" 00:01:03.199 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:03.199 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:03.199 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:03.199 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:03.199 Message: lib/fib: Defining dependency "fib" 00:01:03.199 Message: lib/port: Defining dependency "port" 00:01:03.199 Message: lib/pdump: Defining dependency "pdump" 00:01:03.199 Message: lib/table: Defining dependency "table" 00:01:03.199 Message: lib/pipeline: Defining dependency "pipeline" 00:01:03.199 Message: lib/graph: Defining dependency "graph" 00:01:03.199 Message: lib/node: Defining dependency "node" 00:01:04.579 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:04.579 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:04.579 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:04.579 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:04.579 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:04.579 Compiler for C supports arguments -Wno-unused-value: YES 00:01:04.579 Compiler for C supports arguments -Wno-format: YES 00:01:04.579 Compiler for C supports arguments -Wno-format-security: YES 00:01:04.580 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:04.580 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:04.580 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:04.580 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:04.580 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:04.580 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:04.580 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:04.580 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:04.580 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:04.580 Has header "sys/epoll.h" : YES 00:01:04.580 Program doxygen found: YES (/usr/bin/doxygen) 00:01:04.580 Configuring doxy-api-html.conf using configuration 00:01:04.580 Configuring doxy-api-man.conf using configuration 00:01:04.580 Program mandb found: YES (/usr/bin/mandb) 00:01:04.580 Program sphinx-build found: NO 00:01:04.580 Configuring rte_build_config.h using configuration 00:01:04.580 Message: 00:01:04.580 ================= 00:01:04.580 Applications Enabled 00:01:04.580 
================= 00:01:04.580 00:01:04.580 apps: 00:01:04.580 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:04.580 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:04.580 test-pmd, test-regex, test-sad, test-security-perf, 00:01:04.580 00:01:04.580 Message: 00:01:04.580 ================= 00:01:04.580 Libraries Enabled 00:01:04.580 ================= 00:01:04.580 00:01:04.580 libs: 00:01:04.580 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:04.580 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:04.580 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:04.580 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:04.580 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:04.580 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:04.580 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:04.580 00:01:04.580 00:01:04.580 Message: 00:01:04.580 =============== 00:01:04.580 Drivers Enabled 00:01:04.580 =============== 00:01:04.580 00:01:04.580 common: 00:01:04.580 00:01:04.580 bus: 00:01:04.580 pci, vdev, 00:01:04.580 mempool: 00:01:04.580 ring, 00:01:04.580 dma: 00:01:04.580 00:01:04.580 net: 00:01:04.580 i40e, 00:01:04.580 raw: 00:01:04.580 00:01:04.580 crypto: 00:01:04.580 00:01:04.580 compress: 00:01:04.580 00:01:04.580 regex: 00:01:04.580 00:01:04.580 ml: 00:01:04.580 00:01:04.580 vdpa: 00:01:04.580 00:01:04.580 event: 00:01:04.580 00:01:04.580 baseband: 00:01:04.580 00:01:04.580 gpu: 00:01:04.580 00:01:04.580 00:01:04.580 Message: 00:01:04.580 ================= 00:01:04.580 Content Skipped 00:01:04.580 ================= 00:01:04.580 00:01:04.580 apps: 00:01:04.580 00:01:04.580 libs: 00:01:04.580 00:01:04.580 drivers: 00:01:04.580 common/cpt: not in enabled drivers build config 00:01:04.580 common/dpaax: not in enabled drivers build config 00:01:04.580 common/iavf: not in enabled drivers build config 00:01:04.580 common/idpf: not in enabled drivers build config 00:01:04.580 common/mvep: not in enabled drivers build config 00:01:04.580 common/octeontx: not in enabled drivers build config 00:01:04.580 bus/auxiliary: not in enabled drivers build config 00:01:04.580 bus/cdx: not in enabled drivers build config 00:01:04.580 bus/dpaa: not in enabled drivers build config 00:01:04.580 bus/fslmc: not in enabled drivers build config 00:01:04.580 bus/ifpga: not in enabled drivers build config 00:01:04.580 bus/platform: not in enabled drivers build config 00:01:04.580 bus/vmbus: not in enabled drivers build config 00:01:04.580 common/cnxk: not in enabled drivers build config 00:01:04.580 common/mlx5: not in enabled drivers build config 00:01:04.580 common/nfp: not in enabled drivers build config 00:01:04.580 common/qat: not in enabled drivers build config 00:01:04.580 common/sfc_efx: not in enabled drivers build config 00:01:04.580 mempool/bucket: not in enabled drivers build config 00:01:04.580 mempool/cnxk: not in enabled drivers build config 00:01:04.580 mempool/dpaa: not in enabled drivers build config 00:01:04.580 mempool/dpaa2: not in enabled drivers build config 00:01:04.580 mempool/octeontx: not in enabled drivers build config 00:01:04.580 mempool/stack: not in enabled drivers build config 00:01:04.580 dma/cnxk: not in enabled drivers build config 00:01:04.580 dma/dpaa: not in enabled drivers build config 00:01:04.580 dma/dpaa2: not in enabled drivers build 
config 00:01:04.580 dma/hisilicon: not in enabled drivers build config 00:01:04.580 dma/idxd: not in enabled drivers build config 00:01:04.580 dma/ioat: not in enabled drivers build config 00:01:04.580 dma/skeleton: not in enabled drivers build config 00:01:04.580 net/af_packet: not in enabled drivers build config 00:01:04.580 net/af_xdp: not in enabled drivers build config 00:01:04.580 net/ark: not in enabled drivers build config 00:01:04.580 net/atlantic: not in enabled drivers build config 00:01:04.580 net/avp: not in enabled drivers build config 00:01:04.580 net/axgbe: not in enabled drivers build config 00:01:04.580 net/bnx2x: not in enabled drivers build config 00:01:04.580 net/bnxt: not in enabled drivers build config 00:01:04.580 net/bonding: not in enabled drivers build config 00:01:04.580 net/cnxk: not in enabled drivers build config 00:01:04.580 net/cpfl: not in enabled drivers build config 00:01:04.580 net/cxgbe: not in enabled drivers build config 00:01:04.580 net/dpaa: not in enabled drivers build config 00:01:04.580 net/dpaa2: not in enabled drivers build config 00:01:04.580 net/e1000: not in enabled drivers build config 00:01:04.580 net/ena: not in enabled drivers build config 00:01:04.580 net/enetc: not in enabled drivers build config 00:01:04.580 net/enetfec: not in enabled drivers build config 00:01:04.580 net/enic: not in enabled drivers build config 00:01:04.580 net/failsafe: not in enabled drivers build config 00:01:04.580 net/fm10k: not in enabled drivers build config 00:01:04.580 net/gve: not in enabled drivers build config 00:01:04.580 net/hinic: not in enabled drivers build config 00:01:04.580 net/hns3: not in enabled drivers build config 00:01:04.580 net/iavf: not in enabled drivers build config 00:01:04.580 net/ice: not in enabled drivers build config 00:01:04.580 net/idpf: not in enabled drivers build config 00:01:04.580 net/igc: not in enabled drivers build config 00:01:04.580 net/ionic: not in enabled drivers build config 00:01:04.580 net/ipn3ke: not in enabled drivers build config 00:01:04.580 net/ixgbe: not in enabled drivers build config 00:01:04.580 net/mana: not in enabled drivers build config 00:01:04.580 net/memif: not in enabled drivers build config 00:01:04.580 net/mlx4: not in enabled drivers build config 00:01:04.580 net/mlx5: not in enabled drivers build config 00:01:04.580 net/mvneta: not in enabled drivers build config 00:01:04.580 net/mvpp2: not in enabled drivers build config 00:01:04.580 net/netvsc: not in enabled drivers build config 00:01:04.580 net/nfb: not in enabled drivers build config 00:01:04.580 net/nfp: not in enabled drivers build config 00:01:04.580 net/ngbe: not in enabled drivers build config 00:01:04.580 net/null: not in enabled drivers build config 00:01:04.580 net/octeontx: not in enabled drivers build config 00:01:04.580 net/octeon_ep: not in enabled drivers build config 00:01:04.580 net/pcap: not in enabled drivers build config 00:01:04.580 net/pfe: not in enabled drivers build config 00:01:04.580 net/qede: not in enabled drivers build config 00:01:04.580 net/ring: not in enabled drivers build config 00:01:04.580 net/sfc: not in enabled drivers build config 00:01:04.580 net/softnic: not in enabled drivers build config 00:01:04.580 net/tap: not in enabled drivers build config 00:01:04.580 net/thunderx: not in enabled drivers build config 00:01:04.580 net/txgbe: not in enabled drivers build config 00:01:04.580 net/vdev_netvsc: not in enabled drivers build config 00:01:04.580 net/vhost: not in enabled drivers build config 
00:01:04.580 net/virtio: not in enabled drivers build config 00:01:04.580 net/vmxnet3: not in enabled drivers build config 00:01:04.580 raw/cnxk_bphy: not in enabled drivers build config 00:01:04.580 raw/cnxk_gpio: not in enabled drivers build config 00:01:04.580 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:04.580 raw/ifpga: not in enabled drivers build config 00:01:04.580 raw/ntb: not in enabled drivers build config 00:01:04.580 raw/skeleton: not in enabled drivers build config 00:01:04.580 crypto/armv8: not in enabled drivers build config 00:01:04.580 crypto/bcmfs: not in enabled drivers build config 00:01:04.580 crypto/caam_jr: not in enabled drivers build config 00:01:04.580 crypto/ccp: not in enabled drivers build config 00:01:04.580 crypto/cnxk: not in enabled drivers build config 00:01:04.580 crypto/dpaa_sec: not in enabled drivers build config 00:01:04.580 crypto/dpaa2_sec: not in enabled drivers build config 00:01:04.580 crypto/ipsec_mb: not in enabled drivers build config 00:01:04.580 crypto/mlx5: not in enabled drivers build config 00:01:04.580 crypto/mvsam: not in enabled drivers build config 00:01:04.580 crypto/nitrox: not in enabled drivers build config 00:01:04.580 crypto/null: not in enabled drivers build config 00:01:04.580 crypto/octeontx: not in enabled drivers build config 00:01:04.580 crypto/openssl: not in enabled drivers build config 00:01:04.580 crypto/scheduler: not in enabled drivers build config 00:01:04.580 crypto/uadk: not in enabled drivers build config 00:01:04.580 crypto/virtio: not in enabled drivers build config 00:01:04.581 compress/isal: not in enabled drivers build config 00:01:04.581 compress/mlx5: not in enabled drivers build config 00:01:04.581 compress/octeontx: not in enabled drivers build config 00:01:04.581 compress/zlib: not in enabled drivers build config 00:01:04.581 regex/mlx5: not in enabled drivers build config 00:01:04.581 regex/cn9k: not in enabled drivers build config 00:01:04.581 ml/cnxk: not in enabled drivers build config 00:01:04.581 vdpa/ifc: not in enabled drivers build config 00:01:04.581 vdpa/mlx5: not in enabled drivers build config 00:01:04.581 vdpa/nfp: not in enabled drivers build config 00:01:04.581 vdpa/sfc: not in enabled drivers build config 00:01:04.581 event/cnxk: not in enabled drivers build config 00:01:04.581 event/dlb2: not in enabled drivers build config 00:01:04.581 event/dpaa: not in enabled drivers build config 00:01:04.581 event/dpaa2: not in enabled drivers build config 00:01:04.581 event/dsw: not in enabled drivers build config 00:01:04.581 event/opdl: not in enabled drivers build config 00:01:04.581 event/skeleton: not in enabled drivers build config 00:01:04.581 event/sw: not in enabled drivers build config 00:01:04.581 event/octeontx: not in enabled drivers build config 00:01:04.581 baseband/acc: not in enabled drivers build config 00:01:04.581 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:04.581 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:04.581 baseband/la12xx: not in enabled drivers build config 00:01:04.581 baseband/null: not in enabled drivers build config 00:01:04.581 baseband/turbo_sw: not in enabled drivers build config 00:01:04.581 gpu/cuda: not in enabled drivers build config 00:01:04.581 00:01:04.581 00:01:04.581 Build targets in project: 220 00:01:04.581 00:01:04.581 DPDK 23.11.0 00:01:04.581 00:01:04.581 User defined options 00:01:04.581 libdir : lib 00:01:04.581 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:04.581 
c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:04.581 c_link_args : 00:01:04.581 enable_docs : false 00:01:04.581 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:04.581 enable_kmods : false 00:01:04.581 machine : native 00:01:04.581 tests : false 00:01:04.581 00:01:04.581 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:04.581 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:04.581 18:31:14 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:04.581 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:04.581 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:04.581 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:04.581 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:04.581 [4/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:04.839 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:04.839 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:04.839 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:04.839 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:04.839 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:04.839 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:04.839 [11/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:04.839 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:04.839 [13/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:04.839 [14/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:04.839 [15/710] Linking static target lib/librte_kvargs.a 00:01:04.839 [16/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:04.839 [17/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:04.839 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:05.102 [19/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:05.102 [20/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:05.102 [21/710] Linking static target lib/librte_log.a 00:01:05.102 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.672 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:05.672 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:05.672 [25/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:05.672 [26/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:05.672 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:05.672 [28/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:05.672 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:05.672 [30/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.934 [31/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:05.934 [32/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:05.935 [33/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:05.935 [34/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:05.935 [35/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:05.935 [36/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:05.935 [37/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:05.935 [38/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:05.935 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:05.935 [40/710] Linking target lib/librte_log.so.24.0 00:01:05.935 [41/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:05.935 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:05.935 [43/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:05.935 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:05.935 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:05.935 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:05.935 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:05.935 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:05.935 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:05.935 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:05.935 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:05.935 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:05.935 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:05.935 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:05.935 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:05.935 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:05.935 [57/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:06.216 [58/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:06.216 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:06.216 [60/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:06.216 [61/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:06.216 [62/710] Linking target lib/librte_kvargs.so.24.0 00:01:06.216 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:06.216 [64/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:06.521 [65/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:06.521 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:06.521 [67/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:06.521 [68/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:06.521 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:06.521 [70/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:06.521 [71/710] Linking static target lib/librte_pci.a 00:01:06.787 [72/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:06.787 [73/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:06.787 [74/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:06.787 [75/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:06.787 [76/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:06.787 [77/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:06.787 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:06.787 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:06.787 [80/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.787 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:07.049 [82/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:07.050 [83/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:07.050 [84/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:07.050 [85/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:07.050 [86/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:07.050 [87/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:07.050 [88/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:07.050 [89/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:07.050 [90/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:07.050 [91/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:07.050 [92/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:07.050 [93/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:07.050 [94/710] Linking static target lib/librte_ring.a 00:01:07.050 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:07.050 [96/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:07.050 [97/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:07.050 [98/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:07.314 [99/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:07.314 [100/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:07.314 [101/710] Linking static target lib/librte_meter.a 00:01:07.314 [102/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:07.314 [103/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:07.314 [104/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:07.314 [105/710] Linking static target lib/librte_telemetry.a 00:01:07.314 [106/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:07.314 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:07.314 [108/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:07.314 [109/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:07.314 [110/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:07.314 [111/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:07.314 [112/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:07.578 
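
Note: the "User defined options" block and the WARNING above indicate the configure step invoked `meson [options]` directly; the non-deprecated spelling is `meson setup [options]`. Reconstructed from the options dumped in the log, the equivalent setup command would look roughly like the sketch below (the autobuild script's real command line and option ordering may differ):

    meson setup build-tmp \
        --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
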
[113/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:07.578 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:07.578 [115/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.578 [116/710] Linking static target lib/librte_eal.a 00:01:07.578 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.578 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:07.578 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:07.578 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:07.578 [121/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:07.837 [122/710] Linking static target lib/librte_net.a 00:01:07.837 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:07.837 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:07.837 [125/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:07.837 [126/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:07.837 [127/710] Linking static target lib/librte_mempool.a 00:01:07.837 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:07.837 [129/710] Linking static target lib/librte_cmdline.a 00:01:08.100 [130/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.100 [131/710] Linking target lib/librte_telemetry.so.24.0 00:01:08.100 [132/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:08.100 [133/710] Linking static target lib/librte_cfgfile.a 00:01:08.100 [134/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.100 [135/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:08.100 [136/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:08.100 [137/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:08.100 [138/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:08.100 [139/710] Linking static target lib/librte_metrics.a 00:01:08.100 [140/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:08.363 [141/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:08.363 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:08.363 [143/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:08.363 [144/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:08.363 [145/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:08.363 [146/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:08.630 [147/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:08.630 [148/710] Linking static target lib/librte_bitratestats.a 00:01:08.630 [149/710] Linking static target lib/librte_rcu.a 00:01:08.630 [150/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:08.630 [151/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:08.630 [152/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:08.630 [153/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.630 [154/710] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:08.630 [155/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:08.898 [156/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.898 [157/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.898 [158/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:08.898 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:08.898 [160/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:08.898 [161/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:08.898 [162/710] Linking static target lib/librte_timer.a 00:01:08.898 [163/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.898 [164/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:08.898 [165/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.898 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:09.158 [167/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:09.158 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:09.158 [169/710] Linking static target lib/librte_bbdev.a 00:01:09.158 [170/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:09.159 [171/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.421 [172/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:09.421 [173/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:09.421 [174/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:09.421 [175/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.421 [176/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:09.421 [177/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:09.421 [178/710] Linking static target lib/librte_compressdev.a 00:01:09.421 [179/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:09.684 [180/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:09.684 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:09.944 [182/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:09.944 [183/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:09.944 [184/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:09.944 [185/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:09.944 [186/710] Linking static target lib/librte_distributor.a 00:01:09.944 [187/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.205 [188/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:10.205 [189/710] Linking static target lib/librte_dmadev.a 00:01:10.205 [190/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:10.205 [191/710] Linking static target lib/librte_bpf.a 00:01:10.205 [192/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:10.205 [193/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:10.205 [194/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.205 [195/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:10.466 [196/710] Linking static target lib/librte_dispatcher.a 00:01:10.466 [197/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:10.466 [198/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:10.466 [199/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:10.466 [200/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:10.466 [201/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:10.466 [202/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.466 [203/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:10.466 [204/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:10.466 [205/710] Linking static target lib/librte_gpudev.a 00:01:10.466 [206/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:10.467 [207/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:10.467 [208/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:10.467 [209/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:10.467 [210/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:10.467 [211/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:10.726 [212/710] Linking static target lib/librte_gro.a 00:01:10.726 [213/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.726 [214/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:10.726 [215/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:10.726 [216/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.726 [217/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:10.726 [218/710] Linking static target lib/librte_jobstats.a 00:01:10.991 [219/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:10.991 [220/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:10.991 [221/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.991 [222/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:10.991 [223/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.254 [224/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:11.254 [225/710] Linking static target lib/librte_latencystats.a 00:01:11.254 [226/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:11.254 [227/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:11.254 [228/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.254 [229/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:11.254 [230/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:11.254 [231/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:11.517 [232/710] Compiling C object 
lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:11.517 [233/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:11.517 [234/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:11.517 [235/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:11.517 [236/710] Linking static target lib/librte_ip_frag.a 00:01:11.517 [237/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.517 [238/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:11.776 [239/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:11.776 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:11.776 [241/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:11.776 [242/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:11.776 [243/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.776 [244/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:12.044 [245/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:12.044 [246/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.044 [247/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:12.044 [248/710] Linking static target lib/librte_gso.a 00:01:12.044 [249/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:12.044 [250/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:12.307 [251/710] Linking static target lib/librte_regexdev.a 00:01:12.307 [252/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:12.307 [253/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:12.307 [254/710] Linking static target lib/librte_rawdev.a 00:01:12.307 [255/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:12.307 [256/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:12.307 [257/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:12.307 [258/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.307 [259/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:12.567 [260/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:12.567 [261/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:12.567 [262/710] Linking static target lib/librte_mldev.a 00:01:12.567 [263/710] Linking static target lib/librte_efd.a 00:01:12.567 [264/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:12.567 [265/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:12.567 [266/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:12.567 [267/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:12.567 [268/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:12.567 [269/710] Linking static target lib/librte_pcapng.a 00:01:12.835 [270/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:12.835 [271/710] Linking static target lib/librte_stack.a 00:01:12.835 [272/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:12.835 [273/710] Linking static target lib/librte_lpm.a 00:01:12.835 
[274/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:12.835 [275/710] Linking static target lib/acl/libavx2_tmp.a 00:01:12.835 [276/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:12.835 [277/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:12.835 [278/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.835 [279/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:13.094 [280/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:13.094 [281/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.094 [282/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:13.094 [283/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:13.094 [284/710] Linking static target lib/librte_hash.a 00:01:13.094 [285/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:13.094 [286/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.094 [287/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.094 [288/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:13.354 [289/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:13.354 [290/710] Linking static target lib/acl/libavx512_tmp.a 00:01:13.354 [291/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:13.354 [292/710] Linking static target lib/librte_acl.a 00:01:13.354 [293/710] Linking static target lib/librte_power.a 00:01:13.354 [294/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:13.354 [295/710] Linking static target lib/librte_reorder.a 00:01:13.354 [296/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:13.354 [297/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.354 [298/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.354 [299/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:13.354 [300/710] Linking static target lib/librte_security.a 00:01:13.617 [301/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:13.617 [302/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:13.617 [303/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:13.617 [304/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:13.617 [305/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:13.617 [306/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:13.617 [307/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:13.617 [308/710] Linking static target lib/librte_mbuf.a 00:01:13.617 [309/710] Linking static target lib/librte_rib.a 00:01:13.883 [310/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.883 [311/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.883 [312/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:13.883 [313/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.883 [314/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:13.883 
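
Note: the librte_* components are linked both as static archives (lib/librte_*.a) and as shared objects carrying the 24.0 ABI suffix, which is what the "Linking target lib/librte_log.so.24.0"-style entries refer to. To double-check the soname baked into one of the freshly linked libraries, something along these lines should work from the dpdk checkout (the build-tmp path comes from the ninja -C invocation above; the expected soname, librte_eal.so.24, is an assumption based on DPDK 23.11's ABI versioning):

    readelf -d build-tmp/lib/librte_eal.so.24.0 | grep SONAME
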
[315/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:13.883 [316/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:14.142 [317/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:14.142 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.142 [319/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:14.142 [320/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:14.142 [321/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:14.142 [322/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:14.142 [323/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:14.142 [324/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:14.142 [325/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.404 [326/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:14.404 [327/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.404 [328/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:14.404 [329/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.665 [330/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.665 [331/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:14.665 [332/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:14.923 [333/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:14.923 [334/710] Linking static target lib/librte_eventdev.a 00:01:14.923 [335/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:14.923 [336/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:14.923 [337/710] Linking static target lib/librte_member.a 00:01:14.923 [338/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:14.923 [339/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:15.187 [340/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:15.187 [341/710] Linking static target lib/librte_cryptodev.a 00:01:15.187 [342/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:15.187 [343/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:15.187 [344/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:15.446 [345/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:15.446 [346/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:15.446 [347/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:15.446 [348/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:15.446 [349/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:15.446 [350/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:15.446 [351/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:15.446 [352/710] Linking static target lib/librte_ethdev.a 00:01:15.446 [353/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.446 [354/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:15.446 [355/710] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:15.706 [356/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:15.706 [357/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:15.706 [358/710] Linking static target lib/librte_sched.a 00:01:15.706 [359/710] Linking static target lib/librte_fib.a 00:01:15.706 [360/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:15.706 [361/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:15.706 [362/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:15.706 [363/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:15.706 [364/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:15.706 [365/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:15.968 [366/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:15.968 [367/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:15.968 [368/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:15.968 [369/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:16.230 [370/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.230 [371/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:16.230 [372/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:16.230 [373/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.492 [374/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:16.492 [375/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:16.492 [376/710] Linking static target lib/librte_pdump.a 00:01:16.492 [377/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:16.492 [378/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:16.492 [379/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:16.755 [380/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:16.755 [381/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:16.755 [382/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:16.755 [383/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:16.755 [384/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:16.755 [385/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:16.755 [386/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:16.755 [387/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:16.755 [388/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:16.755 [389/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.755 [390/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:17.015 [391/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:17.015 [392/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:17.015 [393/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:17.015 [394/710] Linking static target lib/librte_ipsec.a 00:01:17.280 [395/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:17.280 [396/710] Linking 
static target lib/librte_table.a 00:01:17.280 [397/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:17.280 [398/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.280 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:17.540 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:17.540 [401/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.540 [402/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:17.803 [403/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:17.803 [404/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:18.077 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:18.077 [406/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:18.077 [407/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:18.077 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:18.077 [409/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:18.077 [410/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:18.077 [411/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.077 [412/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:18.077 [413/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:18.343 [414/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:18.343 [415/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:18.343 [416/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:18.343 [417/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.343 [418/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.343 [419/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:18.602 [420/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:18.602 [421/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:18.602 [422/710] Linking target lib/librte_eal.so.24.0 00:01:18.602 [423/710] Linking static target lib/librte_port.a 00:01:18.602 [424/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:18.602 [425/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:18.602 [426/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:18.602 [427/710] Linking static target drivers/librte_bus_vdev.a 00:01:18.602 [428/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:18.870 [429/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:18.870 [430/710] Linking target lib/librte_ring.so.24.0 00:01:18.870 [431/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:18.870 [432/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:18.870 [433/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:18.870 [434/710] Linking target lib/librte_meter.so.24.0 00:01:18.870 [435/710] Linking target lib/librte_pci.so.24.0 00:01:19.133 [436/710] Compiling 
C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:19.133 [437/710] Linking target lib/librte_timer.so.24.0 00:01:19.133 [438/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:19.133 [439/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.133 [440/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:19.133 [441/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:19.133 [442/710] Linking target lib/librte_acl.so.24.0 00:01:19.133 [443/710] Linking target lib/librte_cfgfile.so.24.0 00:01:19.133 [444/710] Linking target lib/librte_dmadev.so.24.0 00:01:19.133 [445/710] Linking target lib/librte_rcu.so.24.0 00:01:19.133 [446/710] Linking target lib/librte_mempool.so.24.0 00:01:19.133 [447/710] Linking target lib/librte_jobstats.so.24.0 00:01:19.133 [448/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:19.133 [449/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:19.133 [450/710] Linking static target lib/librte_graph.a 00:01:19.396 [451/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:19.396 [452/710] Linking target lib/librte_rawdev.so.24.0 00:01:19.396 [453/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:19.396 [454/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:19.396 [455/710] Linking target lib/librte_stack.so.24.0 00:01:19.396 [456/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:19.396 [457/710] Linking static target drivers/librte_bus_pci.a 00:01:19.396 [458/710] Linking target drivers/librte_bus_vdev.so.24.0 00:01:19.396 [459/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:19.396 [460/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:19.396 [461/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:19.396 [462/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:19.396 [463/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.396 [464/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:19.396 [465/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:19.396 [466/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:19.396 [467/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:19.396 [468/710] Linking target lib/librte_rib.so.24.0 00:01:19.396 [469/710] Linking target lib/librte_mbuf.so.24.0 00:01:19.655 [470/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:19.655 [471/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:19.655 [472/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:19.655 [473/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:19.655 [474/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:19.655 [475/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:19.655 [476/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:19.655 [477/710] Linking static 
target drivers/librte_mempool_ring.a 00:01:19.917 [478/710] Linking target lib/librte_fib.so.24.0 00:01:19.917 [479/710] Linking target lib/librte_bbdev.so.24.0 00:01:19.917 [480/710] Linking target lib/librte_net.so.24.0 00:01:19.917 [481/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:19.917 [482/710] Linking target lib/librte_compressdev.so.24.0 00:01:19.917 [483/710] Linking target lib/librte_cryptodev.so.24.0 00:01:19.917 [484/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:19.917 [485/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:19.917 [486/710] Linking target lib/librte_gpudev.so.24.0 00:01:19.917 [487/710] Linking target lib/librte_distributor.so.24.0 00:01:19.917 [488/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:19.917 [489/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:19.917 [490/710] Linking target lib/librte_regexdev.so.24.0 00:01:19.917 [491/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:19.917 [492/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:19.917 [493/710] Linking target lib/librte_mldev.so.24.0 00:01:20.201 [494/710] Linking target lib/librte_reorder.so.24.0 00:01:20.201 [495/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:20.201 [496/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:20.201 [497/710] Linking target lib/librte_sched.so.24.0 00:01:20.201 [498/710] Linking target drivers/librte_mempool_ring.so.24.0 00:01:20.201 [499/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:20.201 [500/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:20.201 [501/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.201 [502/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:20.201 [503/710] Linking target lib/librte_cmdline.so.24.0 00:01:20.201 [504/710] Linking target lib/librte_hash.so.24.0 00:01:20.201 [505/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:20.202 [506/710] Linking target drivers/librte_bus_pci.so.24.0 00:01:20.202 [507/710] Linking target lib/librte_security.so.24.0 00:01:20.202 [508/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:20.202 [509/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:20.202 [510/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:20.202 [511/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.465 [512/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:20.465 [513/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:20.465 [514/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:20.465 [515/710] Linking target lib/librte_efd.so.24.0 00:01:20.465 [516/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:20.465 [517/710] Linking target lib/librte_lpm.so.24.0 00:01:20.465 [518/710] Linking target lib/librte_member.so.24.0 00:01:20.725 [519/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:20.725 [520/710] Linking 
target lib/librte_ipsec.so.24.0 00:01:20.725 [521/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:20.725 [522/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:20.725 [523/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:20.725 [524/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:20.725 [525/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:20.725 [526/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:20.984 [527/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:20.984 [528/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:21.244 [529/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:21.244 [530/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:21.244 [531/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:21.244 [532/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:21.505 [533/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:21.505 [534/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:21.505 [535/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:21.505 [536/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:21.505 [537/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:21.766 [538/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:21.766 [539/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:21.766 [540/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:21.766 [541/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:22.036 [542/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:22.036 [543/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:22.036 [544/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:22.036 [545/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:22.298 [546/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:22.298 [547/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:22.298 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:22.298 [549/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:22.298 [550/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:22.298 [551/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:22.298 [552/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:22.298 [553/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:22.298 [554/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:22.298 [555/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:22.559 [556/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:22.559 [557/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:22.559 [558/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:22.831 
[559/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:23.090 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:23.090 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:23.351 [562/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:23.351 [563/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:23.617 [564/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:23.617 [565/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:23.617 [566/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:23.617 [567/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:23.617 [568/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.617 [569/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:23.617 [570/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:23.892 [571/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:23.892 [572/710] Linking target lib/librte_ethdev.so.24.0 00:01:23.892 [573/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:23.892 [574/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:23.892 [575/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:23.892 [576/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:23.892 [577/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:23.892 [578/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:24.165 [579/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:24.165 [580/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:24.165 [581/710] Linking target lib/librte_metrics.so.24.0 00:01:24.165 [582/710] Linking target lib/librte_bpf.so.24.0 00:01:24.165 [583/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:24.165 [584/710] Linking target lib/librte_eventdev.so.24.0 00:01:24.165 [585/710] Linking target lib/librte_gro.so.24.0 00:01:24.427 [586/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:24.427 [587/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:24.427 [588/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:24.427 [589/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:24.427 [590/710] Linking target lib/librte_gso.so.24.0 00:01:24.427 [591/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:24.427 [592/710] Linking target lib/librte_ip_frag.so.24.0 00:01:24.427 [593/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:24.427 [594/710] Linking target lib/librte_pcapng.so.24.0 00:01:24.427 [595/710] Linking target lib/librte_power.so.24.0 00:01:24.427 [596/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:24.427 [597/710] Linking target lib/librte_bitratestats.so.24.0 00:01:24.427 [598/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 
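
Note: only the drivers named in enable_drivers (bus, bus/pci, bus/vdev, mempool/ring, net/i40e plus its base code) are built in this configuration, which is why the configure output above marked every other driver as "not in enabled drivers build config" and why the driver objects in this stretch are almost exclusively i40e and the PCI/vdev bus glue. Once linking completes, the resulting driver set could be confirmed with a listing like the sketch below (paths relative to the dpdk checkout, matching the "Linking target drivers/..." entries in the log):

    ls build-tmp/drivers/librte_bus_pci.so.24.0 \
       build-tmp/drivers/librte_bus_vdev.so.24.0 \
       build-tmp/drivers/librte_mempool_ring.so.24.0 \
       build-tmp/drivers/librte_net_i40e.so.24.0
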
00:01:24.427 [599/710] Linking target lib/librte_latencystats.so.24.0 00:01:24.687 [600/710] Linking target lib/librte_dispatcher.so.24.0 00:01:24.687 [601/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:24.687 [602/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:24.687 [603/710] Linking static target lib/librte_pdcp.a 00:01:24.687 [604/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:24.687 [605/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:24.687 [606/710] Linking target lib/librte_pdump.so.24.0 00:01:24.687 [607/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:24.687 [608/710] Linking target lib/librte_port.so.24.0 00:01:24.687 [609/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:24.953 [610/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:24.953 [611/710] Linking target lib/librte_graph.so.24.0 00:01:24.953 [612/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:24.953 [613/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:24.953 [614/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:24.953 [615/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:24.953 [616/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:25.213 [617/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:25.213 [618/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.213 [619/710] Linking target lib/librte_table.so.24.0 00:01:25.213 [620/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:25.213 [621/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:25.213 [622/710] Linking target lib/librte_pdcp.so.24.0 00:01:25.213 [623/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:25.213 [624/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:25.213 [625/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:25.474 [626/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:25.474 [627/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:25.474 [628/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:25.737 [629/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:25.737 [630/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:25.995 [631/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:25.995 [632/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:25.995 [633/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:25.995 [634/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:25.995 [635/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:25.995 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:25.995 [637/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:26.253 [638/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:26.253 [639/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:26.253 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:26.253 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:26.253 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:26.511 [643/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:26.511 [644/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:26.511 [645/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:26.769 [646/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:26.769 [647/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:26.769 [648/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:26.769 [649/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:27.027 [650/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:27.027 [651/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:27.027 [652/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:27.027 [653/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:27.285 [654/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:27.285 [655/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:27.285 [656/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:27.285 [657/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:27.285 [658/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:27.285 [659/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:27.285 [660/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:27.543 [661/710] Linking static target drivers/librte_net_i40e.a 00:01:27.543 [662/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:27.801 [663/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:27.801 [664/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.059 [665/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:28.059 [666/710] Linking target drivers/librte_net_i40e.so.24.0 00:01:28.059 [667/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:28.059 [668/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:28.317 [669/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:28.317 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:28.574 [671/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:28.832 [672/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:28.832 [673/710] Linking static target lib/librte_node.a 00:01:29.090 [674/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.090 [675/710] Linking target lib/librte_node.so.24.0 00:01:29.347 [676/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:30.719 [677/710] Compiling C 
object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:30.719 [678/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:30.719 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:32.093 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:33.028 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:38.316 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:10.379 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:10.379 [684/710] Linking static target lib/librte_vhost.a 00:02:10.379 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.379 [686/710] Linking target lib/librte_vhost.so.24.0 00:02:25.252 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:25.252 [688/710] Linking static target lib/librte_pipeline.a 00:02:25.252 [689/710] Linking target app/dpdk-proc-info 00:02:25.252 [690/710] Linking target app/dpdk-test-acl 00:02:25.252 [691/710] Linking target app/dpdk-dumpcap 00:02:25.252 [692/710] Linking target app/dpdk-pdump 00:02:25.252 [693/710] Linking target app/dpdk-test-cmdline 00:02:25.252 [694/710] Linking target app/dpdk-test-pipeline 00:02:25.252 [695/710] Linking target app/dpdk-test-dma-perf 00:02:25.252 [696/710] Linking target app/dpdk-test-sad 00:02:25.252 [697/710] Linking target app/dpdk-test-fib 00:02:25.252 [698/710] Linking target app/dpdk-test-flow-perf 00:02:25.252 [699/710] Linking target app/dpdk-test-gpudev 00:02:25.252 [700/710] Linking target app/dpdk-test-regex 00:02:25.252 [701/710] Linking target app/dpdk-graph 00:02:25.253 [702/710] Linking target app/dpdk-test-mldev 00:02:25.253 [703/710] Linking target app/dpdk-test-security-perf 00:02:25.253 [704/710] Linking target app/dpdk-test-crypto-perf 00:02:25.253 [705/710] Linking target app/dpdk-test-compress-perf 00:02:25.253 [706/710] Linking target app/dpdk-test-bbdev 00:02:25.253 [707/710] Linking target app/dpdk-test-eventdev 00:02:25.253 [708/710] Linking target app/dpdk-testpmd 00:02:27.152 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.152 [710/710] Linking target lib/librte_pipeline.so.24.0 00:02:27.152 18:32:37 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:27.152 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:27.152 [0/1] Installing files. 
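
Note: the install step that starts here copies the built libraries, headers, pkg-config metadata and the examples tree into the prefix configured earlier (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build), which is what later stages consume. A minimal sketch of building against that installed tree via pkg-config (the lib/pkgconfig location assumes meson's default layout under the logged prefix, and my_app.c is only a placeholder):

    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk     # expected to report 23.11.0
    cc -O2 my_app.c $(pkg-config --cflags --libs libdpdk) -o my_app
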
00:02:27.152 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:27.153 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:27.153 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:27.154 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:27.155 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.155 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:27.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:27.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:27.156 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:27.415 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:27.416 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:27.416 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:27.416 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:27.416 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.416 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.416 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.416 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.416 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.416 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.416 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.416 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.416 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.416 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.416 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.416 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.416 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.416 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.416 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.416 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.416 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.416 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.416 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:27.417 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:28.022 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:28.022 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:28.022 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.022 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:28.022 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.022 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.022 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.022 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.022 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.022 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.022 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.022 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.022 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.022 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.022 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.022 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.022 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.022 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.022 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.022 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.022 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.022 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.022 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.022 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:28.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:28.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:28.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:28.026 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:28.026 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:28.026 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:28.026 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:28.026 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:28.026 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:28.026 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:28.026 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:28.026 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:28.026 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:28.026 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:28.026 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:28.026 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:28.026 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:28.026 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:28.026 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:28.026 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:28.026 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:28.027 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:28.027 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:28.027 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:28.027 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:28.027 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:28.027 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:28.027 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:28.027 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:28.027 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:28.027 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:28.027 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:28.027 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:28.027 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:28.027 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:28.027 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:28.027 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:28.027 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:28.027 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:28.027 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:28.027 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:28.027 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:28.027 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:28.027 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:28.027 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:28.027 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:28.027 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:28.027 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:28.027 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:28.027 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:28.027 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:28.027 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:28.027 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:28.027 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:28.027 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:28.027 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:28.027 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:28.027 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:28.027 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:28.027 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:28.027 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:28.027 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:28.027 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:28.027 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:28.027 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:28.027 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:28.027 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:28.027 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:28.027 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:28.027 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:28.027 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:28.027 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:28.027 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:28.027 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:28.027 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:28.027 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:28.027 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:28.027 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:28.027 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:28.027 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:28.027 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:28.027 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:28.027 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:28.027 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:28.027 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:28.027 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:28.027 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:28.027 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:28.027 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:28.027 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:28.027 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:28.027 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:28.027 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:28.027 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:28.027 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:28.027 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:28.027 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:28.027 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:28.027 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:28.027 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:28.027 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:28.027 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:28.027 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:28.027 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:28.027 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:28.027 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:28.027 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:28.027 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:28.027 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:28.027 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:28.027 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:28.027 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:28.027 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:28.027 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:28.027 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:28.027 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:28.027 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:28.027 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:28.027 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:28.027 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:28.027 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:28.027 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:28.027 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:28.027 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:28.027 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:28.028 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:28.028 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:28.028 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:28.028 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:28.028 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:28.028 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:28.028 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:28.028 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:28.028 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:28.028 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:28.028 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:28.028 18:32:38 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:02:28.028 18:32:38 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:28.028 18:32:38 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:02:28.028 18:32:38 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:28.028 00:02:28.028 real 1m29.055s 00:02:28.028 user 17m59.873s 00:02:28.028 sys 2m5.580s 00:02:28.028 18:32:38 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:28.028 18:32:38 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:28.028 ************************************ 00:02:28.028 END TEST build_native_dpdk 00:02:28.028 ************************************ 00:02:28.028 18:32:38 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:28.028 18:32:38 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:28.028 18:32:38 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:28.028 18:32:38 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:28.028 18:32:38 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:28.028 18:32:38 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:28.028 18:32:38 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:28.028 18:32:38 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:28.285 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:28.285 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:28.285 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:28.285 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:28.543 Using 'verbs' RDMA provider 00:02:39.071 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:47.174 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:47.431 Creating mk/config.mk...done. 00:02:47.431 Creating mk/cc.flags.mk...done. 00:02:47.431 Type 'make' to build. 00:02:47.431 18:32:57 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:47.431 18:32:57 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:47.431 18:32:57 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:47.431 18:32:57 -- common/autotest_common.sh@10 -- $ set +x 00:02:47.431 ************************************ 00:02:47.431 START TEST make 00:02:47.431 ************************************ 00:02:47.431 18:32:57 make -- common/autotest_common.sh@1121 -- $ make -j48 00:02:47.689 make[1]: Nothing to be done for 'all'. 
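The configure call above consumes the DPDK tree that was just staged, via --with-dpdk and the pkg-config files under dpdk/build/lib/pkgconfig. A rough sketch of the same two-stage flow run by hand, assuming $WS stands for the Jenkins workspace root (not a variable the job itself sets) and that DPDK's meson prefix matches the dpdk/build destination seen in the install log; only flags that appear in the log are repeated here:

    # stage 1: build and install DPDK into a local prefix
    cd "$WS/dpdk"
    meson setup build-tmp --prefix="$WS/dpdk/build"
    ninja -C build-tmp
    meson install -C build-tmp
    # stage 2: configure SPDK against that prefix and build
    cd "$WS/spdk"
    ./configure --with-dpdk="$WS/dpdk/build" --with-shared --enable-debug
    make -j"$(nproc)"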
00:02:49.077 The Meson build system 00:02:49.077 Version: 1.3.1 00:02:49.077 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:49.077 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:49.077 Build type: native build 00:02:49.077 Project name: libvfio-user 00:02:49.077 Project version: 0.0.1 00:02:49.077 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:49.077 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:49.077 Host machine cpu family: x86_64 00:02:49.077 Host machine cpu: x86_64 00:02:49.077 Run-time dependency threads found: YES 00:02:49.077 Library dl found: YES 00:02:49.077 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:49.077 Run-time dependency json-c found: YES 0.17 00:02:49.077 Run-time dependency cmocka found: YES 1.1.7 00:02:49.077 Program pytest-3 found: NO 00:02:49.077 Program flake8 found: NO 00:02:49.077 Program misspell-fixer found: NO 00:02:49.077 Program restructuredtext-lint found: NO 00:02:49.077 Program valgrind found: YES (/usr/bin/valgrind) 00:02:49.077 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:49.077 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:49.077 Compiler for C supports arguments -Wwrite-strings: YES 00:02:49.077 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:49.077 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:49.077 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:49.077 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
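Meson resolved json-c, cmocka and valgrind from the build host in the checks above; the same probes can be reproduced by hand, with versions being whatever the host ships (0.17, 1.1.7 and /usr/bin/valgrind in this run):

    pkg-config --modversion json-c    # 0.17 on this host
    pkg-config --modversion cmocka    # 1.1.7 on this host
    command -v valgrind               # /usr/bin/valgrind here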
00:02:49.077 Build targets in project: 8 00:02:49.077 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:49.077 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:49.077 00:02:49.077 libvfio-user 0.0.1 00:02:49.077 00:02:49.077 User defined options 00:02:49.077 buildtype : debug 00:02:49.077 default_library: shared 00:02:49.077 libdir : /usr/local/lib 00:02:49.077 00:02:49.077 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:50.027 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:50.292 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:50.292 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:50.292 [3/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:50.292 [4/37] Compiling C object samples/server.p/server.c.o 00:02:50.292 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:50.292 [6/37] Compiling C object samples/null.p/null.c.o 00:02:50.292 [7/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:50.292 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:50.292 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:50.292 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:50.292 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:50.292 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:50.292 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:50.292 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:50.292 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:50.292 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:50.292 [17/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:50.292 [18/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:50.292 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:50.292 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:50.292 [21/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:50.292 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:50.292 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:50.554 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:50.554 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:50.554 [26/37] Compiling C object samples/client.p/client.c.o 00:02:50.554 [27/37] Linking target samples/client 00:02:50.554 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:50.554 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:50.554 [30/37] Linking target test/unit_tests 00:02:50.554 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:02:50.819 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:50.819 [33/37] Linking target samples/null 00:02:50.819 [34/37] Linking target samples/server 00:02:50.819 [35/37] Linking target samples/gpio-pci-idio-16 00:02:50.819 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:50.819 [37/37] Linking target samples/lspci 00:02:50.819 INFO: autodetecting backend as ninja 00:02:50.819 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
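The options block above (buildtype debug, default_library shared, libdir /usr/local/lib) and the DESTDIR install that follows correspond roughly to the manual sequence below, using the source and build directories Meson reports; the exact wrapper-script invocation used by the SPDK build is not shown in this log:

    meson setup /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user \
        -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
    ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
        meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug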
00:02:51.085 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:51.652 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:51.652 ninja: no work to do. 00:03:03.880 CC lib/ut/ut.o 00:03:03.880 CC lib/log/log.o 00:03:03.880 CC lib/log/log_flags.o 00:03:03.880 CC lib/log/log_deprecated.o 00:03:03.880 CC lib/ut_mock/mock.o 00:03:03.880 LIB libspdk_log.a 00:03:03.880 LIB libspdk_ut.a 00:03:03.880 LIB libspdk_ut_mock.a 00:03:03.880 SO libspdk_ut.so.2.0 00:03:03.880 SO libspdk_log.so.7.0 00:03:03.880 SO libspdk_ut_mock.so.6.0 00:03:03.880 SYMLINK libspdk_ut.so 00:03:03.880 SYMLINK libspdk_log.so 00:03:03.880 SYMLINK libspdk_ut_mock.so 00:03:03.880 CXX lib/trace_parser/trace.o 00:03:03.880 CC lib/dma/dma.o 00:03:03.880 CC lib/ioat/ioat.o 00:03:03.880 CC lib/util/base64.o 00:03:03.880 CC lib/util/bit_array.o 00:03:03.880 CC lib/util/cpuset.o 00:03:03.880 CC lib/util/crc16.o 00:03:03.880 CC lib/util/crc32.o 00:03:03.880 CC lib/util/crc32c.o 00:03:03.880 CC lib/util/crc32_ieee.o 00:03:03.880 CC lib/util/crc64.o 00:03:03.880 CC lib/util/dif.o 00:03:03.880 CC lib/util/fd.o 00:03:03.880 CC lib/util/file.o 00:03:03.880 CC lib/util/hexlify.o 00:03:03.880 CC lib/util/iov.o 00:03:03.880 CC lib/util/math.o 00:03:03.880 CC lib/util/pipe.o 00:03:03.880 CC lib/util/strerror_tls.o 00:03:03.880 CC lib/util/string.o 00:03:03.880 CC lib/util/uuid.o 00:03:03.880 CC lib/util/fd_group.o 00:03:03.880 CC lib/util/xor.o 00:03:03.880 CC lib/util/zipf.o 00:03:03.880 CC lib/vfio_user/host/vfio_user_pci.o 00:03:03.880 CC lib/vfio_user/host/vfio_user.o 00:03:03.880 LIB libspdk_dma.a 00:03:03.880 SO libspdk_dma.so.4.0 00:03:03.880 SYMLINK libspdk_dma.so 00:03:03.880 LIB libspdk_ioat.a 00:03:03.880 SO libspdk_ioat.so.7.0 00:03:03.880 LIB libspdk_vfio_user.a 00:03:03.880 SYMLINK libspdk_ioat.so 00:03:03.880 SO libspdk_vfio_user.so.5.0 00:03:03.880 SYMLINK libspdk_vfio_user.so 00:03:03.880 LIB libspdk_util.a 00:03:04.138 SO libspdk_util.so.9.0 00:03:04.138 SYMLINK libspdk_util.so 00:03:04.396 CC lib/idxd/idxd.o 00:03:04.396 CC lib/rdma/common.o 00:03:04.396 CC lib/json/json_parse.o 00:03:04.396 CC lib/rdma/rdma_verbs.o 00:03:04.396 CC lib/conf/conf.o 00:03:04.396 CC lib/idxd/idxd_user.o 00:03:04.396 CC lib/env_dpdk/env.o 00:03:04.396 CC lib/vmd/vmd.o 00:03:04.396 CC lib/idxd/idxd_kernel.o 00:03:04.396 CC lib/env_dpdk/memory.o 00:03:04.396 CC lib/vmd/led.o 00:03:04.396 CC lib/json/json_util.o 00:03:04.396 CC lib/env_dpdk/pci.o 00:03:04.396 CC lib/json/json_write.o 00:03:04.396 CC lib/env_dpdk/init.o 00:03:04.396 CC lib/env_dpdk/threads.o 00:03:04.397 CC lib/env_dpdk/pci_ioat.o 00:03:04.397 CC lib/env_dpdk/pci_virtio.o 00:03:04.397 CC lib/env_dpdk/pci_vmd.o 00:03:04.397 CC lib/env_dpdk/pci_idxd.o 00:03:04.397 CC lib/env_dpdk/pci_event.o 00:03:04.397 CC lib/env_dpdk/sigbus_handler.o 00:03:04.397 CC lib/env_dpdk/pci_dpdk.o 00:03:04.397 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:04.397 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:04.397 LIB libspdk_trace_parser.a 00:03:04.397 SO libspdk_trace_parser.so.5.0 00:03:04.654 SYMLINK libspdk_trace_parser.so 00:03:04.654 LIB libspdk_conf.a 00:03:04.654 SO libspdk_conf.so.6.0 00:03:04.654 LIB libspdk_rdma.a 00:03:04.654 SYMLINK libspdk_conf.so 00:03:04.654 LIB libspdk_json.a 00:03:04.654 SO libspdk_rdma.so.6.0 00:03:04.654 SO libspdk_json.so.6.0 00:03:04.911 SYMLINK libspdk_rdma.so 00:03:04.911 SYMLINK 
libspdk_json.so 00:03:04.911 CC lib/jsonrpc/jsonrpc_server.o 00:03:04.911 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:04.911 CC lib/jsonrpc/jsonrpc_client.o 00:03:04.911 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:04.911 LIB libspdk_idxd.a 00:03:04.911 SO libspdk_idxd.so.12.0 00:03:05.169 LIB libspdk_vmd.a 00:03:05.169 SYMLINK libspdk_idxd.so 00:03:05.169 SO libspdk_vmd.so.6.0 00:03:05.169 SYMLINK libspdk_vmd.so 00:03:05.169 LIB libspdk_jsonrpc.a 00:03:05.169 SO libspdk_jsonrpc.so.6.0 00:03:05.426 SYMLINK libspdk_jsonrpc.so 00:03:05.427 CC lib/rpc/rpc.o 00:03:05.684 LIB libspdk_rpc.a 00:03:05.684 SO libspdk_rpc.so.6.0 00:03:05.684 SYMLINK libspdk_rpc.so 00:03:05.941 CC lib/keyring/keyring.o 00:03:05.941 CC lib/trace/trace.o 00:03:05.941 CC lib/trace/trace_flags.o 00:03:05.941 CC lib/notify/notify.o 00:03:05.941 CC lib/keyring/keyring_rpc.o 00:03:05.941 CC lib/trace/trace_rpc.o 00:03:05.941 CC lib/notify/notify_rpc.o 00:03:06.199 LIB libspdk_notify.a 00:03:06.199 SO libspdk_notify.so.6.0 00:03:06.199 LIB libspdk_keyring.a 00:03:06.199 SYMLINK libspdk_notify.so 00:03:06.199 LIB libspdk_trace.a 00:03:06.199 SO libspdk_keyring.so.1.0 00:03:06.199 SO libspdk_trace.so.10.0 00:03:06.199 SYMLINK libspdk_keyring.so 00:03:06.199 SYMLINK libspdk_trace.so 00:03:06.457 LIB libspdk_env_dpdk.a 00:03:06.457 CC lib/thread/thread.o 00:03:06.457 CC lib/thread/iobuf.o 00:03:06.457 CC lib/sock/sock.o 00:03:06.457 CC lib/sock/sock_rpc.o 00:03:06.457 SO libspdk_env_dpdk.so.14.0 00:03:06.715 SYMLINK libspdk_env_dpdk.so 00:03:06.715 LIB libspdk_sock.a 00:03:06.974 SO libspdk_sock.so.9.0 00:03:06.974 SYMLINK libspdk_sock.so 00:03:06.974 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:06.974 CC lib/nvme/nvme_ctrlr.o 00:03:06.974 CC lib/nvme/nvme_fabric.o 00:03:06.974 CC lib/nvme/nvme_ns_cmd.o 00:03:06.974 CC lib/nvme/nvme_ns.o 00:03:06.974 CC lib/nvme/nvme_pcie_common.o 00:03:06.974 CC lib/nvme/nvme_pcie.o 00:03:06.974 CC lib/nvme/nvme_qpair.o 00:03:06.974 CC lib/nvme/nvme.o 00:03:06.974 CC lib/nvme/nvme_quirks.o 00:03:06.974 CC lib/nvme/nvme_transport.o 00:03:06.974 CC lib/nvme/nvme_discovery.o 00:03:06.974 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:06.974 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:06.974 CC lib/nvme/nvme_tcp.o 00:03:06.974 CC lib/nvme/nvme_opal.o 00:03:06.974 CC lib/nvme/nvme_io_msg.o 00:03:06.974 CC lib/nvme/nvme_poll_group.o 00:03:06.974 CC lib/nvme/nvme_zns.o 00:03:06.974 CC lib/nvme/nvme_stubs.o 00:03:06.974 CC lib/nvme/nvme_auth.o 00:03:06.974 CC lib/nvme/nvme_cuse.o 00:03:06.974 CC lib/nvme/nvme_vfio_user.o 00:03:06.974 CC lib/nvme/nvme_rdma.o 00:03:07.906 LIB libspdk_thread.a 00:03:07.906 SO libspdk_thread.so.10.0 00:03:08.162 SYMLINK libspdk_thread.so 00:03:08.162 CC lib/blob/blobstore.o 00:03:08.162 CC lib/accel/accel.o 00:03:08.162 CC lib/virtio/virtio.o 00:03:08.162 CC lib/init/json_config.o 00:03:08.162 CC lib/vfu_tgt/tgt_endpoint.o 00:03:08.162 CC lib/blob/request.o 00:03:08.162 CC lib/init/subsystem.o 00:03:08.162 CC lib/virtio/virtio_vhost_user.o 00:03:08.162 CC lib/accel/accel_rpc.o 00:03:08.162 CC lib/vfu_tgt/tgt_rpc.o 00:03:08.162 CC lib/blob/zeroes.o 00:03:08.162 CC lib/init/subsystem_rpc.o 00:03:08.162 CC lib/virtio/virtio_vfio_user.o 00:03:08.162 CC lib/accel/accel_sw.o 00:03:08.162 CC lib/blob/blob_bs_dev.o 00:03:08.162 CC lib/init/rpc.o 00:03:08.162 CC lib/virtio/virtio_pci.o 00:03:08.420 LIB libspdk_init.a 00:03:08.678 SO libspdk_init.so.5.0 00:03:08.678 LIB libspdk_virtio.a 00:03:08.678 LIB libspdk_vfu_tgt.a 00:03:08.678 SYMLINK libspdk_init.so 00:03:08.678 SO libspdk_vfu_tgt.so.3.0 00:03:08.678 
SO libspdk_virtio.so.7.0 00:03:08.678 SYMLINK libspdk_vfu_tgt.so 00:03:08.678 SYMLINK libspdk_virtio.so 00:03:08.678 CC lib/event/app.o 00:03:08.678 CC lib/event/reactor.o 00:03:08.678 CC lib/event/log_rpc.o 00:03:08.678 CC lib/event/app_rpc.o 00:03:08.678 CC lib/event/scheduler_static.o 00:03:09.245 LIB libspdk_event.a 00:03:09.245 SO libspdk_event.so.13.0 00:03:09.245 SYMLINK libspdk_event.so 00:03:09.245 LIB libspdk_accel.a 00:03:09.245 SO libspdk_accel.so.15.0 00:03:09.504 LIB libspdk_nvme.a 00:03:09.504 SYMLINK libspdk_accel.so 00:03:09.504 SO libspdk_nvme.so.13.0 00:03:09.504 CC lib/bdev/bdev.o 00:03:09.504 CC lib/bdev/bdev_rpc.o 00:03:09.504 CC lib/bdev/bdev_zone.o 00:03:09.504 CC lib/bdev/part.o 00:03:09.504 CC lib/bdev/scsi_nvme.o 00:03:09.763 SYMLINK libspdk_nvme.so 00:03:11.139 LIB libspdk_blob.a 00:03:11.139 SO libspdk_blob.so.11.0 00:03:11.398 SYMLINK libspdk_blob.so 00:03:11.398 CC lib/blobfs/blobfs.o 00:03:11.398 CC lib/blobfs/tree.o 00:03:11.398 CC lib/lvol/lvol.o 00:03:12.339 LIB libspdk_bdev.a 00:03:12.339 SO libspdk_bdev.so.15.0 00:03:12.339 SYMLINK libspdk_bdev.so 00:03:12.339 LIB libspdk_blobfs.a 00:03:12.339 SO libspdk_blobfs.so.10.0 00:03:12.339 SYMLINK libspdk_blobfs.so 00:03:12.339 CC lib/ublk/ublk.o 00:03:12.339 CC lib/scsi/dev.o 00:03:12.339 CC lib/ublk/ublk_rpc.o 00:03:12.339 CC lib/nvmf/ctrlr.o 00:03:12.339 CC lib/nbd/nbd.o 00:03:12.339 CC lib/scsi/lun.o 00:03:12.339 CC lib/nvmf/ctrlr_discovery.o 00:03:12.339 CC lib/nbd/nbd_rpc.o 00:03:12.339 CC lib/scsi/port.o 00:03:12.339 CC lib/nvmf/ctrlr_bdev.o 00:03:12.339 CC lib/ftl/ftl_core.o 00:03:12.339 CC lib/scsi/scsi.o 00:03:12.339 CC lib/nvmf/subsystem.o 00:03:12.339 CC lib/ftl/ftl_init.o 00:03:12.339 CC lib/scsi/scsi_bdev.o 00:03:12.339 CC lib/nvmf/nvmf.o 00:03:12.339 CC lib/ftl/ftl_layout.o 00:03:12.339 CC lib/scsi/scsi_pr.o 00:03:12.339 CC lib/nvmf/nvmf_rpc.o 00:03:12.339 CC lib/ftl/ftl_debug.o 00:03:12.339 CC lib/ftl/ftl_io.o 00:03:12.339 CC lib/nvmf/transport.o 00:03:12.339 CC lib/scsi/scsi_rpc.o 00:03:12.339 CC lib/scsi/task.o 00:03:12.339 CC lib/ftl/ftl_sb.o 00:03:12.339 CC lib/nvmf/tcp.o 00:03:12.339 CC lib/nvmf/stubs.o 00:03:12.339 CC lib/ftl/ftl_l2p.o 00:03:12.339 CC lib/ftl/ftl_l2p_flat.o 00:03:12.339 CC lib/nvmf/mdns_server.o 00:03:12.339 CC lib/nvmf/vfio_user.o 00:03:12.339 CC lib/ftl/ftl_nv_cache.o 00:03:12.339 CC lib/ftl/ftl_band.o 00:03:12.339 CC lib/ftl/ftl_band_ops.o 00:03:12.339 CC lib/nvmf/rdma.o 00:03:12.339 CC lib/nvmf/auth.o 00:03:12.339 CC lib/ftl/ftl_writer.o 00:03:12.339 CC lib/ftl/ftl_rq.o 00:03:12.339 CC lib/ftl/ftl_reloc.o 00:03:12.339 CC lib/ftl/ftl_l2p_cache.o 00:03:12.339 CC lib/ftl/ftl_p2l.o 00:03:12.339 CC lib/ftl/mngt/ftl_mngt.o 00:03:12.339 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:12.339 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:12.339 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:12.339 LIB libspdk_lvol.a 00:03:12.339 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:12.339 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:12.599 SO libspdk_lvol.so.10.0 00:03:12.599 SYMLINK libspdk_lvol.so 00:03:12.599 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:12.860 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:12.860 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:12.860 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:12.860 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:12.860 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:12.860 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:12.860 CC lib/ftl/utils/ftl_conf.o 00:03:12.860 CC lib/ftl/utils/ftl_md.o 00:03:12.860 CC lib/ftl/utils/ftl_mempool.o 00:03:12.860 CC lib/ftl/utils/ftl_bitmap.o 00:03:12.860 CC 
lib/ftl/utils/ftl_property.o 00:03:12.860 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:12.860 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:12.860 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:12.860 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:12.860 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:12.860 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:12.860 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:13.118 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:13.118 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:13.118 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:13.118 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:13.118 CC lib/ftl/base/ftl_base_dev.o 00:03:13.118 CC lib/ftl/base/ftl_base_bdev.o 00:03:13.118 CC lib/ftl/ftl_trace.o 00:03:13.118 LIB libspdk_nbd.a 00:03:13.376 SO libspdk_nbd.so.7.0 00:03:13.376 LIB libspdk_scsi.a 00:03:13.376 SYMLINK libspdk_nbd.so 00:03:13.376 SO libspdk_scsi.so.9.0 00:03:13.376 LIB libspdk_ublk.a 00:03:13.376 SO libspdk_ublk.so.3.0 00:03:13.376 SYMLINK libspdk_scsi.so 00:03:13.641 SYMLINK libspdk_ublk.so 00:03:13.641 CC lib/vhost/vhost.o 00:03:13.641 CC lib/iscsi/conn.o 00:03:13.641 CC lib/vhost/vhost_rpc.o 00:03:13.641 CC lib/vhost/vhost_scsi.o 00:03:13.641 CC lib/iscsi/init_grp.o 00:03:13.641 CC lib/iscsi/iscsi.o 00:03:13.641 CC lib/vhost/vhost_blk.o 00:03:13.641 CC lib/vhost/rte_vhost_user.o 00:03:13.641 CC lib/iscsi/md5.o 00:03:13.641 CC lib/iscsi/param.o 00:03:13.641 CC lib/iscsi/portal_grp.o 00:03:13.641 CC lib/iscsi/tgt_node.o 00:03:13.641 CC lib/iscsi/iscsi_subsystem.o 00:03:13.641 CC lib/iscsi/iscsi_rpc.o 00:03:13.641 CC lib/iscsi/task.o 00:03:13.911 LIB libspdk_ftl.a 00:03:14.170 SO libspdk_ftl.so.9.0 00:03:14.428 SYMLINK libspdk_ftl.so 00:03:14.995 LIB libspdk_vhost.a 00:03:14.995 SO libspdk_vhost.so.8.0 00:03:14.995 LIB libspdk_nvmf.a 00:03:14.995 SYMLINK libspdk_vhost.so 00:03:14.995 SO libspdk_nvmf.so.18.0 00:03:14.995 LIB libspdk_iscsi.a 00:03:15.253 SO libspdk_iscsi.so.8.0 00:03:15.253 SYMLINK libspdk_nvmf.so 00:03:15.253 SYMLINK libspdk_iscsi.so 00:03:15.511 CC module/env_dpdk/env_dpdk_rpc.o 00:03:15.511 CC module/vfu_device/vfu_virtio.o 00:03:15.511 CC module/vfu_device/vfu_virtio_blk.o 00:03:15.511 CC module/vfu_device/vfu_virtio_scsi.o 00:03:15.511 CC module/vfu_device/vfu_virtio_rpc.o 00:03:15.511 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:15.511 CC module/accel/iaa/accel_iaa.o 00:03:15.511 CC module/keyring/file/keyring.o 00:03:15.511 CC module/sock/posix/posix.o 00:03:15.769 CC module/accel/ioat/accel_ioat.o 00:03:15.769 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:15.769 CC module/blob/bdev/blob_bdev.o 00:03:15.769 CC module/accel/iaa/accel_iaa_rpc.o 00:03:15.769 CC module/accel/ioat/accel_ioat_rpc.o 00:03:15.769 CC module/keyring/file/keyring_rpc.o 00:03:15.769 CC module/accel/dsa/accel_dsa.o 00:03:15.769 CC module/accel/error/accel_error.o 00:03:15.769 CC module/scheduler/gscheduler/gscheduler.o 00:03:15.769 CC module/accel/dsa/accel_dsa_rpc.o 00:03:15.769 CC module/keyring/linux/keyring.o 00:03:15.769 CC module/accel/error/accel_error_rpc.o 00:03:15.769 CC module/keyring/linux/keyring_rpc.o 00:03:15.769 LIB libspdk_env_dpdk_rpc.a 00:03:15.769 SO libspdk_env_dpdk_rpc.so.6.0 00:03:15.769 SYMLINK libspdk_env_dpdk_rpc.so 00:03:15.769 LIB libspdk_keyring_file.a 00:03:15.769 LIB libspdk_keyring_linux.a 00:03:15.769 LIB libspdk_scheduler_dpdk_governor.a 00:03:15.769 LIB libspdk_scheduler_gscheduler.a 00:03:15.769 SO libspdk_keyring_linux.so.1.0 00:03:15.769 SO libspdk_keyring_file.so.1.0 00:03:15.769 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:15.769 SO 
libspdk_scheduler_gscheduler.so.4.0 00:03:15.769 LIB libspdk_accel_error.a 00:03:15.769 LIB libspdk_scheduler_dynamic.a 00:03:15.769 LIB libspdk_accel_ioat.a 00:03:15.769 SO libspdk_accel_error.so.2.0 00:03:15.769 LIB libspdk_accel_iaa.a 00:03:16.027 SO libspdk_scheduler_dynamic.so.4.0 00:03:16.027 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:16.027 SO libspdk_accel_ioat.so.6.0 00:03:16.027 SYMLINK libspdk_scheduler_gscheduler.so 00:03:16.027 SYMLINK libspdk_keyring_file.so 00:03:16.027 SYMLINK libspdk_keyring_linux.so 00:03:16.027 SO libspdk_accel_iaa.so.3.0 00:03:16.027 SYMLINK libspdk_accel_error.so 00:03:16.027 LIB libspdk_accel_dsa.a 00:03:16.027 SYMLINK libspdk_scheduler_dynamic.so 00:03:16.027 LIB libspdk_blob_bdev.a 00:03:16.027 SYMLINK libspdk_accel_ioat.so 00:03:16.027 SO libspdk_accel_dsa.so.5.0 00:03:16.027 SO libspdk_blob_bdev.so.11.0 00:03:16.027 SYMLINK libspdk_accel_iaa.so 00:03:16.027 SYMLINK libspdk_accel_dsa.so 00:03:16.027 SYMLINK libspdk_blob_bdev.so 00:03:16.286 LIB libspdk_vfu_device.a 00:03:16.286 CC module/blobfs/bdev/blobfs_bdev.o 00:03:16.286 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:16.286 SO libspdk_vfu_device.so.3.0 00:03:16.286 CC module/bdev/null/bdev_null.o 00:03:16.286 CC module/bdev/delay/vbdev_delay.o 00:03:16.286 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:16.286 CC module/bdev/null/bdev_null_rpc.o 00:03:16.286 CC module/bdev/lvol/vbdev_lvol.o 00:03:16.286 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:16.286 CC module/bdev/error/vbdev_error.o 00:03:16.286 CC module/bdev/ftl/bdev_ftl.o 00:03:16.286 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:16.286 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:16.286 CC module/bdev/raid/bdev_raid.o 00:03:16.286 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:16.286 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:16.286 CC module/bdev/gpt/gpt.o 00:03:16.286 CC module/bdev/raid/bdev_raid_rpc.o 00:03:16.286 CC module/bdev/split/vbdev_split.o 00:03:16.286 CC module/bdev/error/vbdev_error_rpc.o 00:03:16.286 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:16.286 CC module/bdev/aio/bdev_aio.o 00:03:16.286 CC module/bdev/split/vbdev_split_rpc.o 00:03:16.286 CC module/bdev/malloc/bdev_malloc.o 00:03:16.286 CC module/bdev/passthru/vbdev_passthru.o 00:03:16.286 CC module/bdev/gpt/vbdev_gpt.o 00:03:16.286 CC module/bdev/raid/bdev_raid_sb.o 00:03:16.286 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:16.286 CC module/bdev/raid/raid0.o 00:03:16.286 CC module/bdev/nvme/bdev_nvme.o 00:03:16.286 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:16.286 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:16.286 CC module/bdev/aio/bdev_aio_rpc.o 00:03:16.286 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:16.286 CC module/bdev/iscsi/bdev_iscsi.o 00:03:16.286 CC module/bdev/raid/raid1.o 00:03:16.286 CC module/bdev/nvme/nvme_rpc.o 00:03:16.286 CC module/bdev/raid/concat.o 00:03:16.286 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:16.286 CC module/bdev/nvme/bdev_mdns_client.o 00:03:16.286 CC module/bdev/nvme/vbdev_opal.o 00:03:16.286 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:16.286 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:16.544 SYMLINK libspdk_vfu_device.so 00:03:16.544 LIB libspdk_sock_posix.a 00:03:16.544 SO libspdk_sock_posix.so.6.0 00:03:16.544 LIB libspdk_blobfs_bdev.a 00:03:16.803 SO libspdk_blobfs_bdev.so.6.0 00:03:16.803 LIB libspdk_bdev_split.a 00:03:16.803 SYMLINK libspdk_sock_posix.so 00:03:16.803 SYMLINK libspdk_blobfs_bdev.so 00:03:16.803 SO libspdk_bdev_split.so.6.0 00:03:16.803 LIB libspdk_bdev_malloc.a 00:03:16.803 LIB 
libspdk_bdev_error.a 00:03:16.803 SYMLINK libspdk_bdev_split.so 00:03:16.803 LIB libspdk_bdev_null.a 00:03:16.803 LIB libspdk_bdev_gpt.a 00:03:16.803 SO libspdk_bdev_error.so.6.0 00:03:16.803 SO libspdk_bdev_malloc.so.6.0 00:03:16.803 SO libspdk_bdev_null.so.6.0 00:03:16.803 LIB libspdk_bdev_ftl.a 00:03:16.803 LIB libspdk_bdev_iscsi.a 00:03:16.803 LIB libspdk_bdev_aio.a 00:03:16.803 SO libspdk_bdev_gpt.so.6.0 00:03:16.803 LIB libspdk_bdev_passthru.a 00:03:16.803 SO libspdk_bdev_ftl.so.6.0 00:03:16.803 SO libspdk_bdev_iscsi.so.6.0 00:03:16.803 SO libspdk_bdev_aio.so.6.0 00:03:16.803 LIB libspdk_bdev_delay.a 00:03:16.803 SO libspdk_bdev_passthru.so.6.0 00:03:16.803 SYMLINK libspdk_bdev_error.so 00:03:16.803 SYMLINK libspdk_bdev_malloc.so 00:03:16.803 SYMLINK libspdk_bdev_null.so 00:03:16.803 LIB libspdk_bdev_zone_block.a 00:03:16.803 SYMLINK libspdk_bdev_gpt.so 00:03:16.803 SO libspdk_bdev_delay.so.6.0 00:03:16.803 SO libspdk_bdev_zone_block.so.6.0 00:03:17.061 SYMLINK libspdk_bdev_iscsi.so 00:03:17.061 SYMLINK libspdk_bdev_ftl.so 00:03:17.061 SYMLINK libspdk_bdev_aio.so 00:03:17.061 SYMLINK libspdk_bdev_passthru.so 00:03:17.061 SYMLINK libspdk_bdev_delay.so 00:03:17.061 SYMLINK libspdk_bdev_zone_block.so 00:03:17.061 LIB libspdk_bdev_virtio.a 00:03:17.061 LIB libspdk_bdev_lvol.a 00:03:17.061 SO libspdk_bdev_lvol.so.6.0 00:03:17.061 SO libspdk_bdev_virtio.so.6.0 00:03:17.061 SYMLINK libspdk_bdev_lvol.so 00:03:17.061 SYMLINK libspdk_bdev_virtio.so 00:03:17.319 LIB libspdk_bdev_raid.a 00:03:17.577 SO libspdk_bdev_raid.so.6.0 00:03:17.577 SYMLINK libspdk_bdev_raid.so 00:03:18.950 LIB libspdk_bdev_nvme.a 00:03:18.950 SO libspdk_bdev_nvme.so.7.0 00:03:18.950 SYMLINK libspdk_bdev_nvme.so 00:03:19.209 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:19.209 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:19.209 CC module/event/subsystems/iobuf/iobuf.o 00:03:19.209 CC module/event/subsystems/keyring/keyring.o 00:03:19.209 CC module/event/subsystems/sock/sock.o 00:03:19.209 CC module/event/subsystems/vmd/vmd.o 00:03:19.209 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:19.209 CC module/event/subsystems/scheduler/scheduler.o 00:03:19.209 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:19.209 LIB libspdk_event_keyring.a 00:03:19.209 LIB libspdk_event_vhost_blk.a 00:03:19.209 LIB libspdk_event_sock.a 00:03:19.209 LIB libspdk_event_vfu_tgt.a 00:03:19.209 LIB libspdk_event_scheduler.a 00:03:19.209 LIB libspdk_event_vmd.a 00:03:19.209 LIB libspdk_event_iobuf.a 00:03:19.209 SO libspdk_event_sock.so.5.0 00:03:19.209 SO libspdk_event_keyring.so.1.0 00:03:19.209 SO libspdk_event_vhost_blk.so.3.0 00:03:19.209 SO libspdk_event_vfu_tgt.so.3.0 00:03:19.209 SO libspdk_event_scheduler.so.4.0 00:03:19.209 SO libspdk_event_vmd.so.6.0 00:03:19.467 SO libspdk_event_iobuf.so.3.0 00:03:19.467 SYMLINK libspdk_event_sock.so 00:03:19.467 SYMLINK libspdk_event_vhost_blk.so 00:03:19.467 SYMLINK libspdk_event_keyring.so 00:03:19.467 SYMLINK libspdk_event_scheduler.so 00:03:19.467 SYMLINK libspdk_event_vfu_tgt.so 00:03:19.467 SYMLINK libspdk_event_vmd.so 00:03:19.467 SYMLINK libspdk_event_iobuf.so 00:03:19.467 CC module/event/subsystems/accel/accel.o 00:03:19.725 LIB libspdk_event_accel.a 00:03:19.725 SO libspdk_event_accel.so.6.0 00:03:19.725 SYMLINK libspdk_event_accel.so 00:03:19.985 CC module/event/subsystems/bdev/bdev.o 00:03:20.246 LIB libspdk_event_bdev.a 00:03:20.246 SO libspdk_event_bdev.so.6.0 00:03:20.246 SYMLINK libspdk_event_bdev.so 00:03:20.246 CC module/event/subsystems/ublk/ublk.o 00:03:20.246 CC 
module/event/subsystems/nbd/nbd.o 00:03:20.246 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:20.246 CC module/event/subsystems/scsi/scsi.o 00:03:20.246 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:20.504 LIB libspdk_event_nbd.a 00:03:20.504 LIB libspdk_event_ublk.a 00:03:20.504 LIB libspdk_event_scsi.a 00:03:20.504 SO libspdk_event_ublk.so.3.0 00:03:20.504 SO libspdk_event_nbd.so.6.0 00:03:20.504 SO libspdk_event_scsi.so.6.0 00:03:20.504 SYMLINK libspdk_event_ublk.so 00:03:20.504 SYMLINK libspdk_event_nbd.so 00:03:20.504 SYMLINK libspdk_event_scsi.so 00:03:20.504 LIB libspdk_event_nvmf.a 00:03:20.504 SO libspdk_event_nvmf.so.6.0 00:03:20.761 SYMLINK libspdk_event_nvmf.so 00:03:20.761 CC module/event/subsystems/iscsi/iscsi.o 00:03:20.761 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:21.018 LIB libspdk_event_vhost_scsi.a 00:03:21.018 LIB libspdk_event_iscsi.a 00:03:21.018 SO libspdk_event_vhost_scsi.so.3.0 00:03:21.018 SO libspdk_event_iscsi.so.6.0 00:03:21.018 SYMLINK libspdk_event_vhost_scsi.so 00:03:21.018 SYMLINK libspdk_event_iscsi.so 00:03:21.018 SO libspdk.so.6.0 00:03:21.018 SYMLINK libspdk.so 00:03:21.286 CC app/trace_record/trace_record.o 00:03:21.286 TEST_HEADER include/spdk/accel.h 00:03:21.286 TEST_HEADER include/spdk/accel_module.h 00:03:21.286 TEST_HEADER include/spdk/assert.h 00:03:21.286 TEST_HEADER include/spdk/barrier.h 00:03:21.286 TEST_HEADER include/spdk/base64.h 00:03:21.286 TEST_HEADER include/spdk/bdev.h 00:03:21.286 TEST_HEADER include/spdk/bdev_module.h 00:03:21.286 TEST_HEADER include/spdk/bdev_zone.h 00:03:21.286 TEST_HEADER include/spdk/bit_array.h 00:03:21.286 CC app/spdk_nvme_perf/perf.o 00:03:21.286 TEST_HEADER include/spdk/bit_pool.h 00:03:21.286 CC app/spdk_lspci/spdk_lspci.o 00:03:21.286 CXX app/trace/trace.o 00:03:21.286 CC app/spdk_top/spdk_top.o 00:03:21.286 TEST_HEADER include/spdk/blob_bdev.h 00:03:21.286 CC app/spdk_nvme_discover/discovery_aer.o 00:03:21.286 CC app/spdk_nvme_identify/identify.o 00:03:21.286 CC test/rpc_client/rpc_client_test.o 00:03:21.286 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:21.286 TEST_HEADER include/spdk/blobfs.h 00:03:21.286 TEST_HEADER include/spdk/blob.h 00:03:21.286 TEST_HEADER include/spdk/conf.h 00:03:21.286 TEST_HEADER include/spdk/config.h 00:03:21.286 TEST_HEADER include/spdk/cpuset.h 00:03:21.286 TEST_HEADER include/spdk/crc16.h 00:03:21.286 TEST_HEADER include/spdk/crc32.h 00:03:21.286 TEST_HEADER include/spdk/crc64.h 00:03:21.286 TEST_HEADER include/spdk/dif.h 00:03:21.286 TEST_HEADER include/spdk/dma.h 00:03:21.286 TEST_HEADER include/spdk/endian.h 00:03:21.286 TEST_HEADER include/spdk/env_dpdk.h 00:03:21.286 TEST_HEADER include/spdk/env.h 00:03:21.286 TEST_HEADER include/spdk/event.h 00:03:21.286 TEST_HEADER include/spdk/fd_group.h 00:03:21.286 TEST_HEADER include/spdk/fd.h 00:03:21.286 TEST_HEADER include/spdk/file.h 00:03:21.286 TEST_HEADER include/spdk/ftl.h 00:03:21.286 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:21.286 TEST_HEADER include/spdk/gpt_spec.h 00:03:21.286 CC app/iscsi_tgt/iscsi_tgt.o 00:03:21.551 CC app/spdk_dd/spdk_dd.o 00:03:21.551 TEST_HEADER include/spdk/hexlify.h 00:03:21.551 TEST_HEADER include/spdk/histogram_data.h 00:03:21.551 CC app/nvmf_tgt/nvmf_main.o 00:03:21.551 TEST_HEADER include/spdk/idxd.h 00:03:21.551 TEST_HEADER include/spdk/idxd_spec.h 00:03:21.551 TEST_HEADER include/spdk/init.h 00:03:21.551 TEST_HEADER include/spdk/ioat.h 00:03:21.551 TEST_HEADER include/spdk/ioat_spec.h 00:03:21.551 CC app/vhost/vhost.o 00:03:21.551 TEST_HEADER 
include/spdk/iscsi_spec.h 00:03:21.551 TEST_HEADER include/spdk/json.h 00:03:21.551 TEST_HEADER include/spdk/jsonrpc.h 00:03:21.551 TEST_HEADER include/spdk/keyring.h 00:03:21.551 TEST_HEADER include/spdk/keyring_module.h 00:03:21.551 TEST_HEADER include/spdk/likely.h 00:03:21.551 CC app/spdk_tgt/spdk_tgt.o 00:03:21.551 TEST_HEADER include/spdk/log.h 00:03:21.551 TEST_HEADER include/spdk/lvol.h 00:03:21.551 TEST_HEADER include/spdk/memory.h 00:03:21.551 CC test/app/jsoncat/jsoncat.o 00:03:21.551 CC examples/accel/perf/accel_perf.o 00:03:21.551 CC examples/idxd/perf/perf.o 00:03:21.552 TEST_HEADER include/spdk/mmio.h 00:03:21.552 CC test/env/vtophys/vtophys.o 00:03:21.552 CC examples/vmd/led/led.o 00:03:21.552 TEST_HEADER include/spdk/nbd.h 00:03:21.552 CC test/thread/poller_perf/poller_perf.o 00:03:21.552 TEST_HEADER include/spdk/notify.h 00:03:21.552 CC examples/ioat/perf/perf.o 00:03:21.552 CC examples/sock/hello_world/hello_sock.o 00:03:21.552 CC test/app/stub/stub.o 00:03:21.552 CC test/app/histogram_perf/histogram_perf.o 00:03:21.552 CC examples/nvme/reconnect/reconnect.o 00:03:21.552 CC examples/vmd/lsvmd/lsvmd.o 00:03:21.552 CC examples/nvme/hello_world/hello_world.o 00:03:21.552 CC app/fio/nvme/fio_plugin.o 00:03:21.552 TEST_HEADER include/spdk/nvme.h 00:03:21.552 TEST_HEADER include/spdk/nvme_intel.h 00:03:21.552 CC test/nvme/aer/aer.o 00:03:21.552 CC test/nvme/reset/reset.o 00:03:21.552 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:21.552 CC examples/util/zipf/zipf.o 00:03:21.552 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:21.552 TEST_HEADER include/spdk/nvme_spec.h 00:03:21.552 CC test/event/event_perf/event_perf.o 00:03:21.552 TEST_HEADER include/spdk/nvme_zns.h 00:03:21.552 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:21.552 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:21.552 TEST_HEADER include/spdk/nvmf.h 00:03:21.552 TEST_HEADER include/spdk/nvmf_spec.h 00:03:21.552 TEST_HEADER include/spdk/nvmf_transport.h 00:03:21.552 TEST_HEADER include/spdk/opal.h 00:03:21.552 CC test/accel/dif/dif.o 00:03:21.552 TEST_HEADER include/spdk/opal_spec.h 00:03:21.552 TEST_HEADER include/spdk/pci_ids.h 00:03:21.552 CC examples/bdev/hello_world/hello_bdev.o 00:03:21.552 TEST_HEADER include/spdk/pipe.h 00:03:21.552 TEST_HEADER include/spdk/queue.h 00:03:21.552 TEST_HEADER include/spdk/reduce.h 00:03:21.552 CC examples/blob/cli/blobcli.o 00:03:21.552 CC test/bdev/bdevio/bdevio.o 00:03:21.552 TEST_HEADER include/spdk/rpc.h 00:03:21.552 TEST_HEADER include/spdk/scheduler.h 00:03:21.552 CC app/fio/bdev/fio_plugin.o 00:03:21.552 CC examples/blob/hello_world/hello_blob.o 00:03:21.552 TEST_HEADER include/spdk/scsi.h 00:03:21.552 CC examples/thread/thread/thread_ex.o 00:03:21.552 CC examples/nvmf/nvmf/nvmf.o 00:03:21.552 CC test/dma/test_dma/test_dma.o 00:03:21.552 TEST_HEADER include/spdk/scsi_spec.h 00:03:21.552 CC examples/bdev/bdevperf/bdevperf.o 00:03:21.552 CC test/blobfs/mkfs/mkfs.o 00:03:21.552 TEST_HEADER include/spdk/sock.h 00:03:21.552 CC test/app/bdev_svc/bdev_svc.o 00:03:21.552 TEST_HEADER include/spdk/stdinc.h 00:03:21.552 TEST_HEADER include/spdk/string.h 00:03:21.552 TEST_HEADER include/spdk/thread.h 00:03:21.552 TEST_HEADER include/spdk/trace.h 00:03:21.552 TEST_HEADER include/spdk/trace_parser.h 00:03:21.552 TEST_HEADER include/spdk/tree.h 00:03:21.552 TEST_HEADER include/spdk/ublk.h 00:03:21.552 TEST_HEADER include/spdk/util.h 00:03:21.552 TEST_HEADER include/spdk/uuid.h 00:03:21.552 TEST_HEADER include/spdk/version.h 00:03:21.552 TEST_HEADER include/spdk/vfio_user_pci.h 
00:03:21.552 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:21.552 TEST_HEADER include/spdk/vhost.h 00:03:21.552 TEST_HEADER include/spdk/vmd.h 00:03:21.552 TEST_HEADER include/spdk/xor.h 00:03:21.552 TEST_HEADER include/spdk/zipf.h 00:03:21.552 CXX test/cpp_headers/accel.o 00:03:21.552 CC test/env/mem_callbacks/mem_callbacks.o 00:03:21.552 LINK spdk_lspci 00:03:21.552 CC test/lvol/esnap/esnap.o 00:03:21.552 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:21.811 LINK rpc_client_test 00:03:21.811 LINK spdk_nvme_discover 00:03:21.811 LINK lsvmd 00:03:21.811 LINK interrupt_tgt 00:03:21.811 LINK jsoncat 00:03:21.811 LINK led 00:03:21.811 LINK histogram_perf 00:03:21.811 LINK poller_perf 00:03:21.811 LINK nvmf_tgt 00:03:21.811 LINK vtophys 00:03:21.811 LINK event_perf 00:03:21.811 LINK zipf 00:03:21.811 LINK spdk_trace_record 00:03:21.811 LINK iscsi_tgt 00:03:21.811 LINK vhost 00:03:21.811 LINK stub 00:03:21.811 LINK spdk_tgt 00:03:21.811 LINK ioat_perf 00:03:22.076 LINK bdev_svc 00:03:22.076 LINK hello_world 00:03:22.076 LINK hello_sock 00:03:22.076 LINK mkfs 00:03:22.076 CXX test/cpp_headers/accel_module.o 00:03:22.076 LINK reset 00:03:22.076 LINK hello_blob 00:03:22.076 LINK hello_bdev 00:03:22.076 LINK aer 00:03:22.076 LINK thread 00:03:22.076 LINK idxd_perf 00:03:22.076 CC examples/ioat/verify/verify.o 00:03:22.076 CXX test/cpp_headers/assert.o 00:03:22.076 LINK spdk_dd 00:03:22.076 CXX test/cpp_headers/barrier.o 00:03:22.076 LINK reconnect 00:03:22.076 LINK nvmf 00:03:22.076 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:22.076 LINK spdk_trace 00:03:22.337 CC examples/nvme/arbitration/arbitration.o 00:03:22.337 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:22.337 CC test/event/reactor/reactor.o 00:03:22.337 CXX test/cpp_headers/base64.o 00:03:22.337 CXX test/cpp_headers/bdev.o 00:03:22.337 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:22.337 CC test/env/pci/pci_ut.o 00:03:22.337 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:22.337 LINK bdevio 00:03:22.337 CC examples/nvme/hotplug/hotplug.o 00:03:22.337 LINK test_dma 00:03:22.337 CC test/env/memory/memory_ut.o 00:03:22.337 CC test/nvme/sgl/sgl.o 00:03:22.337 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:22.337 CC test/event/reactor_perf/reactor_perf.o 00:03:22.337 CXX test/cpp_headers/bdev_module.o 00:03:22.337 LINK dif 00:03:22.337 CXX test/cpp_headers/bdev_zone.o 00:03:22.337 CC test/nvme/e2edp/nvme_dp.o 00:03:22.337 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:22.337 CC test/event/app_repeat/app_repeat.o 00:03:22.337 LINK accel_perf 00:03:22.337 CXX test/cpp_headers/bit_array.o 00:03:22.600 LINK nvme_fuzz 00:03:22.600 CC test/nvme/overhead/overhead.o 00:03:22.600 LINK spdk_bdev 00:03:22.600 LINK blobcli 00:03:22.600 CC test/nvme/err_injection/err_injection.o 00:03:22.600 CC examples/nvme/abort/abort.o 00:03:22.600 LINK verify 00:03:22.600 CC test/event/scheduler/scheduler.o 00:03:22.600 LINK reactor 00:03:22.600 LINK spdk_nvme 00:03:22.600 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:22.600 CC test/nvme/startup/startup.o 00:03:22.600 LINK env_dpdk_post_init 00:03:22.600 CXX test/cpp_headers/bit_pool.o 00:03:22.600 CXX test/cpp_headers/blob_bdev.o 00:03:22.600 CC test/nvme/reserve/reserve.o 00:03:22.600 CC test/nvme/simple_copy/simple_copy.o 00:03:22.600 CC test/nvme/connect_stress/connect_stress.o 00:03:22.600 CC test/nvme/boot_partition/boot_partition.o 00:03:22.600 LINK reactor_perf 00:03:22.600 CC test/nvme/compliance/nvme_compliance.o 00:03:22.861 CXX test/cpp_headers/blobfs_bdev.o 00:03:22.861 CXX 
test/cpp_headers/blobfs.o 00:03:22.861 LINK hotplug 00:03:22.861 CXX test/cpp_headers/blob.o 00:03:22.861 LINK app_repeat 00:03:22.861 CXX test/cpp_headers/conf.o 00:03:22.861 CC test/nvme/fused_ordering/fused_ordering.o 00:03:22.861 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:22.861 CXX test/cpp_headers/config.o 00:03:22.861 LINK cmb_copy 00:03:22.861 CXX test/cpp_headers/cpuset.o 00:03:22.861 CXX test/cpp_headers/crc16.o 00:03:22.861 CXX test/cpp_headers/crc32.o 00:03:22.861 CXX test/cpp_headers/crc64.o 00:03:22.861 CXX test/cpp_headers/dif.o 00:03:22.861 CXX test/cpp_headers/dma.o 00:03:22.861 CC test/nvme/fdp/fdp.o 00:03:22.861 CXX test/cpp_headers/endian.o 00:03:22.861 LINK mem_callbacks 00:03:22.861 LINK spdk_nvme_perf 00:03:22.861 CC test/nvme/cuse/cuse.o 00:03:22.861 LINK sgl 00:03:22.861 LINK arbitration 00:03:22.861 CXX test/cpp_headers/env_dpdk.o 00:03:22.861 LINK spdk_nvme_identify 00:03:22.861 CXX test/cpp_headers/env.o 00:03:22.861 LINK startup 00:03:22.861 CXX test/cpp_headers/event.o 00:03:22.861 LINK err_injection 00:03:22.861 LINK pmr_persistence 00:03:23.119 LINK nvme_dp 00:03:23.119 CXX test/cpp_headers/fd_group.o 00:03:23.119 CXX test/cpp_headers/fd.o 00:03:23.119 LINK scheduler 00:03:23.119 LINK boot_partition 00:03:23.119 LINK connect_stress 00:03:23.119 LINK pci_ut 00:03:23.119 LINK reserve 00:03:23.119 LINK spdk_top 00:03:23.119 LINK overhead 00:03:23.119 CXX test/cpp_headers/file.o 00:03:23.119 LINK bdevperf 00:03:23.119 CXX test/cpp_headers/ftl.o 00:03:23.119 LINK simple_copy 00:03:23.119 CXX test/cpp_headers/gpt_spec.o 00:03:23.119 CXX test/cpp_headers/hexlify.o 00:03:23.119 LINK nvme_manage 00:03:23.119 LINK vhost_fuzz 00:03:23.119 CXX test/cpp_headers/histogram_data.o 00:03:23.119 CXX test/cpp_headers/idxd.o 00:03:23.119 CXX test/cpp_headers/idxd_spec.o 00:03:23.119 CXX test/cpp_headers/init.o 00:03:23.119 CXX test/cpp_headers/ioat.o 00:03:23.119 CXX test/cpp_headers/ioat_spec.o 00:03:23.119 CXX test/cpp_headers/iscsi_spec.o 00:03:23.119 CXX test/cpp_headers/json.o 00:03:23.119 CXX test/cpp_headers/jsonrpc.o 00:03:23.119 LINK fused_ordering 00:03:23.119 LINK doorbell_aers 00:03:23.119 CXX test/cpp_headers/keyring.o 00:03:23.119 CXX test/cpp_headers/keyring_module.o 00:03:23.119 CXX test/cpp_headers/likely.o 00:03:23.119 CXX test/cpp_headers/log.o 00:03:23.120 CXX test/cpp_headers/lvol.o 00:03:23.380 CXX test/cpp_headers/memory.o 00:03:23.380 CXX test/cpp_headers/mmio.o 00:03:23.380 LINK abort 00:03:23.380 CXX test/cpp_headers/nbd.o 00:03:23.380 CXX test/cpp_headers/notify.o 00:03:23.380 CXX test/cpp_headers/nvme.o 00:03:23.380 CXX test/cpp_headers/nvme_intel.o 00:03:23.380 CXX test/cpp_headers/nvme_ocssd.o 00:03:23.380 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:23.380 CXX test/cpp_headers/nvme_spec.o 00:03:23.380 LINK nvme_compliance 00:03:23.380 CXX test/cpp_headers/nvme_zns.o 00:03:23.380 CXX test/cpp_headers/nvmf_cmd.o 00:03:23.380 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:23.380 CXX test/cpp_headers/nvmf.o 00:03:23.380 CXX test/cpp_headers/nvmf_spec.o 00:03:23.380 CXX test/cpp_headers/nvmf_transport.o 00:03:23.380 CXX test/cpp_headers/opal.o 00:03:23.380 CXX test/cpp_headers/opal_spec.o 00:03:23.380 CXX test/cpp_headers/pci_ids.o 00:03:23.380 CXX test/cpp_headers/pipe.o 00:03:23.380 CXX test/cpp_headers/queue.o 00:03:23.380 CXX test/cpp_headers/reduce.o 00:03:23.380 CXX test/cpp_headers/rpc.o 00:03:23.638 CXX test/cpp_headers/scheduler.o 00:03:23.638 CXX test/cpp_headers/scsi.o 00:03:23.638 CXX test/cpp_headers/scsi_spec.o 00:03:23.638 CXX 
test/cpp_headers/sock.o 00:03:23.638 LINK fdp 00:03:23.638 CXX test/cpp_headers/stdinc.o 00:03:23.638 CXX test/cpp_headers/string.o 00:03:23.638 CXX test/cpp_headers/thread.o 00:03:23.638 CXX test/cpp_headers/trace.o 00:03:23.638 CXX test/cpp_headers/trace_parser.o 00:03:23.638 CXX test/cpp_headers/tree.o 00:03:23.638 CXX test/cpp_headers/ublk.o 00:03:23.638 CXX test/cpp_headers/util.o 00:03:23.638 CXX test/cpp_headers/uuid.o 00:03:23.638 CXX test/cpp_headers/version.o 00:03:23.638 CXX test/cpp_headers/vfio_user_pci.o 00:03:23.638 CXX test/cpp_headers/vfio_user_spec.o 00:03:23.638 CXX test/cpp_headers/vhost.o 00:03:23.638 CXX test/cpp_headers/vmd.o 00:03:23.638 CXX test/cpp_headers/xor.o 00:03:23.638 CXX test/cpp_headers/zipf.o 00:03:24.202 LINK memory_ut 00:03:24.773 LINK cuse 00:03:24.773 LINK iscsi_fuzz 00:03:27.365 LINK esnap 00:03:27.932 00:03:27.932 real 0m40.349s 00:03:27.932 user 7m33.347s 00:03:27.932 sys 1m49.117s 00:03:27.932 18:33:38 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:27.932 18:33:38 make -- common/autotest_common.sh@10 -- $ set +x 00:03:27.932 ************************************ 00:03:27.932 END TEST make 00:03:27.932 ************************************ 00:03:27.932 18:33:38 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:27.932 18:33:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:27.932 18:33:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:27.932 18:33:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.932 18:33:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:27.932 18:33:38 -- pm/common@44 -- $ pid=1148005 00:03:27.932 18:33:38 -- pm/common@50 -- $ kill -TERM 1148005 00:03:27.932 18:33:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.932 18:33:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:27.932 18:33:38 -- pm/common@44 -- $ pid=1148007 00:03:27.932 18:33:38 -- pm/common@50 -- $ kill -TERM 1148007 00:03:27.932 18:33:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.932 18:33:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:27.932 18:33:38 -- pm/common@44 -- $ pid=1148009 00:03:27.932 18:33:38 -- pm/common@50 -- $ kill -TERM 1148009 00:03:27.932 18:33:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.932 18:33:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:27.932 18:33:38 -- pm/common@44 -- $ pid=1148037 00:03:27.932 18:33:38 -- pm/common@50 -- $ sudo -E kill -TERM 1148037 00:03:27.932 18:33:38 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:27.932 18:33:38 -- nvmf/common.sh@7 -- # uname -s 00:03:27.932 18:33:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:27.932 18:33:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:27.932 18:33:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:27.932 18:33:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:27.932 18:33:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:27.932 18:33:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:27.932 18:33:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:27.932 18:33:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:03:27.932 18:33:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:27.932 18:33:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:27.932 18:33:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:27.932 18:33:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:27.932 18:33:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:27.932 18:33:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:27.932 18:33:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:27.932 18:33:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:27.932 18:33:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:27.932 18:33:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:27.932 18:33:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:27.932 18:33:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:27.932 18:33:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.932 18:33:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.932 18:33:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.932 18:33:38 -- paths/export.sh@5 -- # export PATH 00:03:27.932 18:33:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.932 18:33:38 -- nvmf/common.sh@47 -- # : 0 00:03:27.932 18:33:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:27.932 18:33:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:27.932 18:33:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:27.932 18:33:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:27.932 18:33:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:27.932 18:33:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:27.932 18:33:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:27.932 18:33:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:27.932 18:33:38 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:27.932 18:33:38 -- spdk/autotest.sh@32 -- # uname -s 00:03:27.932 18:33:38 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:27.932 18:33:38 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:27.932 18:33:38 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:27.932 18:33:38 -- spdk/autotest.sh@39 -- # echo 
'|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:27.932 18:33:38 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:27.932 18:33:38 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:27.932 18:33:38 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:27.932 18:33:38 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:27.932 18:33:38 -- spdk/autotest.sh@48 -- # udevadm_pid=1224649 00:03:27.932 18:33:38 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:27.932 18:33:38 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:27.932 18:33:38 -- pm/common@17 -- # local monitor 00:03:27.932 18:33:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.932 18:33:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.932 18:33:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.932 18:33:38 -- pm/common@21 -- # date +%s 00:03:27.932 18:33:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.932 18:33:38 -- pm/common@21 -- # date +%s 00:03:27.932 18:33:38 -- pm/common@25 -- # sleep 1 00:03:27.932 18:33:38 -- pm/common@21 -- # date +%s 00:03:27.932 18:33:38 -- pm/common@21 -- # date +%s 00:03:27.932 18:33:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721493218 00:03:27.932 18:33:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721493218 00:03:27.932 18:33:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721493218 00:03:27.932 18:33:38 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721493218 00:03:27.932 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721493218_collect-vmstat.pm.log 00:03:27.932 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721493218_collect-cpu-load.pm.log 00:03:27.932 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721493218_collect-cpu-temp.pm.log 00:03:27.932 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721493218_collect-bmc-pm.bmc.pm.log 00:03:28.867 18:33:39 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:28.867 18:33:39 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:28.867 18:33:39 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:28.867 18:33:39 -- common/autotest_common.sh@10 -- # set +x 00:03:28.867 18:33:39 -- spdk/autotest.sh@59 -- # create_test_list 00:03:28.867 18:33:39 -- common/autotest_common.sh@744 -- # xtrace_disable 00:03:28.867 18:33:39 -- common/autotest_common.sh@10 -- # set +x 00:03:28.867 18:33:39 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:28.867 18:33:39 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:28.867 18:33:39 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:28.867 18:33:39 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:28.867 18:33:39 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:28.867 18:33:39 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:28.867 18:33:39 -- common/autotest_common.sh@1451 -- # uname 00:03:29.125 18:33:39 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:03:29.125 18:33:39 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:29.125 18:33:39 -- common/autotest_common.sh@1471 -- # uname 00:03:29.125 18:33:39 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:03:29.125 18:33:39 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:29.125 18:33:39 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:29.125 18:33:39 -- spdk/autotest.sh@72 -- # hash lcov 00:03:29.125 18:33:39 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:29.125 18:33:39 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:29.125 --rc lcov_branch_coverage=1 00:03:29.125 --rc lcov_function_coverage=1 00:03:29.125 --rc genhtml_branch_coverage=1 00:03:29.125 --rc genhtml_function_coverage=1 00:03:29.125 --rc genhtml_legend=1 00:03:29.125 --rc geninfo_all_blocks=1 00:03:29.125 ' 00:03:29.125 18:33:39 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:29.125 --rc lcov_branch_coverage=1 00:03:29.125 --rc lcov_function_coverage=1 00:03:29.125 --rc genhtml_branch_coverage=1 00:03:29.125 --rc genhtml_function_coverage=1 00:03:29.125 --rc genhtml_legend=1 00:03:29.125 --rc geninfo_all_blocks=1 00:03:29.125 ' 00:03:29.125 18:33:39 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:29.125 --rc lcov_branch_coverage=1 00:03:29.125 --rc lcov_function_coverage=1 00:03:29.125 --rc genhtml_branch_coverage=1 00:03:29.125 --rc genhtml_function_coverage=1 00:03:29.125 --rc genhtml_legend=1 00:03:29.125 --rc geninfo_all_blocks=1 00:03:29.125 --no-external' 00:03:29.125 18:33:39 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:29.125 --rc lcov_branch_coverage=1 00:03:29.125 --rc lcov_function_coverage=1 00:03:29.125 --rc genhtml_branch_coverage=1 00:03:29.125 --rc genhtml_function_coverage=1 00:03:29.125 --rc genhtml_legend=1 00:03:29.125 --rc geninfo_all_blocks=1 00:03:29.125 --no-external' 00:03:29.125 18:33:39 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:29.125 lcov: LCOV version 1.14 00:03:29.125 18:33:39 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:43.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:43.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:58.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:58.846 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:58.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:58.846 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:58.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:58.846 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:58.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:58.846 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:58.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:58.846 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:58.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:58.846 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:58.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:58.846 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:58.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:58.846 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:58.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:58.846 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:58.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:58.846 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:58.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:58.846 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:58.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:58.846 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:58.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:58.846 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:58.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:58.846 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:58.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 
00:03:58.846 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:58.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:58.846 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:58.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:58.846 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:58.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:58.846 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:58.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:58.846 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:58.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:58.847 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:58.847 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:58.847 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:58.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:58.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:58.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:58.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:58.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:58.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:58.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:58.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:58.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:58.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:58.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:58.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:58.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:58.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:58.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:58.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:58.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:58.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:58.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:58.848 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:58.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:58.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:58.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:58.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:58.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:58.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:58.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:58.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:58.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:03.027 18:34:12 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:03.027 18:34:12 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:03.027 18:34:12 -- common/autotest_common.sh@10 -- # set +x 00:04:03.027 18:34:12 -- spdk/autotest.sh@91 -- # rm -f 00:04:03.027 18:34:12 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:03.972 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:03.972 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:03.972 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:03.972 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:03.972 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:03.972 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:03.972 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:03.972 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:03.972 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:03.972 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:03.972 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:03.972 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:03.972 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:03.972 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:03.972 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:03.972 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:03.972 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:04.230 18:34:14 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:04.230 18:34:14 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:04.230 18:34:14 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:04.230 18:34:14 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:04.230 18:34:14 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:04.230 18:34:14 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:04.230 18:34:14 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:04.230 18:34:14 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:04.230 18:34:14 
-- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:04.230 18:34:14 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:04.230 18:34:14 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:04.230 18:34:14 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:04.230 18:34:14 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:04.230 18:34:14 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:04.230 18:34:14 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:04.230 No valid GPT data, bailing 00:04:04.230 18:34:14 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:04.230 18:34:14 -- scripts/common.sh@391 -- # pt= 00:04:04.230 18:34:14 -- scripts/common.sh@392 -- # return 1 00:04:04.230 18:34:14 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:04.230 1+0 records in 00:04:04.230 1+0 records out 00:04:04.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00234484 s, 447 MB/s 00:04:04.230 18:34:14 -- spdk/autotest.sh@118 -- # sync 00:04:04.230 18:34:14 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:04.230 18:34:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:04.230 18:34:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:06.128 18:34:16 -- spdk/autotest.sh@124 -- # uname -s 00:04:06.128 18:34:16 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:06.128 18:34:16 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:06.128 18:34:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:06.128 18:34:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:06.128 18:34:16 -- common/autotest_common.sh@10 -- # set +x 00:04:06.128 ************************************ 00:04:06.128 START TEST setup.sh 00:04:06.128 ************************************ 00:04:06.128 18:34:16 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:06.128 * Looking for test storage... 00:04:06.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:06.128 18:34:16 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:06.128 18:34:16 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:06.128 18:34:16 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:06.128 18:34:16 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:06.128 18:34:16 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:06.128 18:34:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:06.128 ************************************ 00:04:06.128 START TEST acl 00:04:06.128 ************************************ 00:04:06.128 18:34:16 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:06.128 * Looking for test storage... 
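The long run of "geninfo: WARNING: GCOV did not produce any data for .../test/cpp_headers/*.gcno" entries earlier in this log is expected rather than a failure: each cpp_headers object comes from a translation unit that only includes one public SPDK header to prove it compiles, so there are no executed functions for gcov to report. If that noise is unwanted in the merged coverage report, one option (not part of this run; the tracefile names below are illustrative, not taken from the log) is to strip those objects from the captured lcov tracefile afterwards:

# Hypothetical post-processing step: drop the header-compile objects
# from an already-captured lcov tracefile. 'cov_total.info' and
# 'cov_filtered.info' are assumed names, not produced by this run.
lcov --remove cov_total.info '*/test/cpp_headers/*' \
     --output-file cov_filtered.info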
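The pre-cleanup trace above (autotest.sh@94-114) filters out zoned namespaces, asks scripts/spdk-gpt.py and blkid whether the remaining namespace carries a partition table, and then zeroes its first MiB before the tests start. A condensed sketch of that sequence, assuming a single /dev/nvme0n1 as in this run; the device name and the standalone form of the checks are illustrative, the real logic lives in autotest_common.sh and scripts/common.sh and must run as root:

# Skip namespaces that report themselves as zoned block devices.
for nvme in /sys/block/nvme*; do
    dev=$(basename "$nvme")
    if [[ -e $nvme/queue/zoned && $(cat "$nvme/queue/zoned") != none ]]; then
        echo "skipping zoned device $dev"
        continue
    fi
    # No partition table according to blkid? Then the namespace is
    # treated as free and its first MiB is wiped, matching the trace
    # ("No valid GPT data, bailing" followed by dd).
    if [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]]; then
        dd if=/dev/zero of="/dev/$dev" bs=1M count=1
    fi
done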
00:04:06.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:06.128 18:34:16 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:06.128 18:34:16 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:06.128 18:34:16 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:06.128 18:34:16 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:06.128 18:34:16 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:06.128 18:34:16 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:06.128 18:34:16 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:06.128 18:34:16 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:06.128 18:34:16 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:06.128 18:34:16 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:06.128 18:34:16 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:06.128 18:34:16 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:06.128 18:34:16 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:06.128 18:34:16 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:06.128 18:34:16 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:06.128 18:34:16 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:07.497 18:34:17 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:07.497 18:34:17 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:07.497 18:34:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.497 18:34:17 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:07.497 18:34:17 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.497 18:34:17 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:08.428 Hugepages 00:04:08.428 node hugesize free / total 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.428 00:04:08.428 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.428 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.429 18:34:18 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.429 18:34:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.686 18:34:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:08.686 18:34:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:08.686 18:34:18 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:08.686 18:34:18 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:08.686 18:34:18 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:08.686 18:34:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.686 18:34:18 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:08.686 18:34:18 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:08.686 18:34:18 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:08.686 18:34:18 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:08.686 18:34:18 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:08.686 ************************************ 00:04:08.686 START TEST denied 00:04:08.686 ************************************ 00:04:08.686 18:34:18 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:04:08.686 18:34:18 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:08.686 18:34:18 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:08.686 18:34:18 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:08.686 18:34:18 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.686 18:34:18 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:10.059 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:10.059 18:34:20 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:10.059 18:34:20 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:10.059 18:34:20 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:10.059 18:34:20 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:10.059 18:34:20 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:10.059 18:34:20 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:10.059 18:34:20 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:10.059 18:34:20 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:10.059 18:34:20 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.059 18:34:20 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:12.588 00:04:12.588 real 0m3.573s 00:04:12.588 user 0m1.087s 00:04:12.588 sys 0m1.648s 00:04:12.588 18:34:22 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:12.588 18:34:22 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:12.588 ************************************ 00:04:12.588 END TEST denied 00:04:12.588 ************************************ 00:04:12.588 18:34:22 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:12.588 18:34:22 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:12.588 18:34:22 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:12.588 18:34:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:12.588 ************************************ 00:04:12.588 START TEST allowed 00:04:12.588 ************************************ 00:04:12.588 18:34:22 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:04:12.588 18:34:22 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:12.588 18:34:22 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:12.588 18:34:22 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:12.588 18:34:22 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.588 18:34:22 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:14.497 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:14.497 18:34:24 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:14.497 18:34:24 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:14.497 18:34:24 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:14.497 18:34:24 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.497 18:34:24 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:16.399 00:04:16.399 real 0m3.909s 00:04:16.399 user 0m1.074s 00:04:16.399 sys 0m1.723s 00:04:16.399 18:34:26 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:16.399 18:34:26 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:16.399 ************************************ 00:04:16.399 END TEST allowed 00:04:16.399 ************************************ 00:04:16.399 00:04:16.399 real 0m10.128s 00:04:16.399 user 0m3.249s 00:04:16.399 sys 0m5.013s 00:04:16.399 18:34:26 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:16.399 18:34:26 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:16.399 ************************************ 00:04:16.399 END TEST acl 00:04:16.399 ************************************ 00:04:16.399 18:34:26 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:16.399 18:34:26 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:16.399 18:34:26 setup.sh -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:04:16.399 18:34:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:16.399 ************************************ 00:04:16.399 START TEST hugepages 00:04:16.399 ************************************ 00:04:16.399 18:34:26 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:16.399 * Looking for test storage... 00:04:16.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 41109432 kB' 'MemAvailable: 44623996 kB' 'Buffers: 2704 kB' 'Cached: 12838656 kB' 'SwapCached: 0 kB' 'Active: 9830108 kB' 'Inactive: 3508168 kB' 'Active(anon): 9434528 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500340 kB' 'Mapped: 193396 kB' 'Shmem: 8937612 kB' 'KReclaimable: 209308 kB' 'Slab: 600128 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390820 kB' 'KernelStack: 13040 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562312 kB' 'Committed_AS: 10615152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196984 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
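The denied/allowed ACL tests above exercise the two environment variables that scripts/setup.sh honours when binding controllers: with PCI_BLOCKED set, the 0000:88:00.0 NVMe drive is left on the kernel nvme driver ("Skipping denied controller at 0000:88:00.0"), and with PCI_ALLOWED set it is the only device rebound ("nvme -> vfio-pci") while the ioatdma channels are ignored. A usage sketch with the addresses copied from this run; it needs root and detaches the drive from the kernel driver, so it is only safe on a test box like this one:

# Leave 0000:88:00.0 alone and bind everything else for SPDK:
PCI_BLOCKED="0000:88:00.0" ./scripts/setup.sh config

# Opposite: touch only 0000:88:00.0, skip the ioatdma channels:
PCI_ALLOWED="0000:88:00.0" ./scripts/setup.sh config

# Return all devices to their kernel drivers afterwards:
./scripts/setup.sh reset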
00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.399 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.400 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:16.401 18:34:26 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:16.401 18:34:26 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:16.401 18:34:26 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:16.401 18:34:26 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:16.401 18:34:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:16.401 ************************************ 00:04:16.401 START TEST default_setup 00:04:16.401 ************************************ 00:04:16.401 18:34:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:04:16.401 18:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:16.401 18:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:16.401 18:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:16.401 18:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:16.401 18:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:16.401 18:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:16.401 18:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:16.401 18:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:16.401 18:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:16.401 18:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:16.401 18:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:16.401 18:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:16.401 18:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:16.401 18:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:16.401 18:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:16.401 18:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:16.401 18:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:16.401 18:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:16.401 18:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:16.401 18:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:16.401 18:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.401 18:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:17.333 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:17.333 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:17.333 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:17.333 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:17.333 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:17.333 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:17.333 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 
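The long xtrace block above is setup/common.sh's get_meminfo walking /proc/meminfo one "key: value" pair at a time until it reaches Hugepagesize and echoes 2048; hugepages.sh then converts the requested 2097152 kB into 2097152 / 2048 = 1024 pages for the default_setup test. A compact sketch of the same lookup, with the function name kept but the per-node meminfo handling omitted and the bash read loop replaced by a single awk call:

# Return the numeric value of one /proc/meminfo field, e.g. 2048
# for "Hugepagesize:    2048 kB".
get_meminfo() {
    local key=$1
    awk -F': *' -v k="$key" '$1 == k {print $2+0}' /proc/meminfo
}

hugepagesize_kb=$(get_meminfo Hugepagesize)    # 2048 on this machine
nr_hugepages=$((2097152 / hugepagesize_kb))    # 1024 pages requested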
00:04:17.333 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:17.333 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:17.333 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:17.333 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:17.333 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:17.333 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:17.333 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:17.333 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:17.333 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:18.718 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:18.718 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:18.718 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:18.718 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:18.718 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:18.718 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:18.718 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:18.718 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:18.718 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:18.718 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:18.718 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:18.718 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:18.718 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:18.718 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:18.718 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.718 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.718 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.718 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.718 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.718 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.718 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43222292 kB' 'MemAvailable: 46736856 kB' 'Buffers: 2704 kB' 'Cached: 12838756 kB' 'SwapCached: 0 kB' 'Active: 9849984 kB' 'Inactive: 3508168 kB' 'Active(anon): 9454404 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519756 kB' 'Mapped: 193140 kB' 'Shmem: 8937712 kB' 'KReclaimable: 209308 kB' 'Slab: 599512 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390204 kB' 'KernelStack: 13280 kB' 'PageTables: 10080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10635496 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 197452 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 18:34:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.720 18:34:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.720 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43224152 kB' 'MemAvailable: 46738716 kB' 'Buffers: 2704 kB' 'Cached: 12838756 kB' 'SwapCached: 0 kB' 'Active: 9843284 kB' 'Inactive: 3508168 kB' 'Active(anon): 9447704 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513248 kB' 'Mapped: 193268 kB' 'Shmem: 8937712 kB' 'KReclaimable: 209308 kB' 'Slab: 599508 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390200 kB' 'KernelStack: 12880 kB' 'PageTables: 8588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10629024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197192 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.722 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.723 18:34:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43222688 kB' 'MemAvailable: 46737252 kB' 'Buffers: 2704 kB' 'Cached: 12838776 kB' 'SwapCached: 0 kB' 'Active: 9845400 kB' 'Inactive: 3508168 kB' 'Active(anon): 9449820 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515420 kB' 'Mapped: 193080 kB' 'Shmem: 8937732 kB' 'KReclaimable: 209308 kB' 'Slab: 599556 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390248 kB' 'KernelStack: 12752 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10633412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197160 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.723 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.724 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.725 18:34:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:18.725 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.726 18:34:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:18.726 nr_hugepages=1024 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:18.726 resv_hugepages=0 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:18.726 surplus_hugepages=0 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:18.726 anon_hugepages=0 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43219208 kB' 'MemAvailable: 46733772 kB' 'Buffers: 2704 kB' 'Cached: 12838776 kB' 'SwapCached: 0 kB' 'Active: 9847328 kB' 'Inactive: 3508168 kB' 'Active(anon): 
9451748 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517344 kB' 'Mapped: 193556 kB' 'Shmem: 8937732 kB' 'KReclaimable: 209308 kB' 'Slab: 599556 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390248 kB' 'KernelStack: 12848 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10635556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197164 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.726 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
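[Editor's sketch, not part of the captured test output] The trace here is setup/common.sh walking every field of the meminfo dump printed just above and skipping each one that does not match the requested key (in this pass, HugePages_Total), then echoing the matching value. A minimal standalone approximation of that lookup is sketched below; it assumes only the standard /proc/meminfo and /sys/devices/system/node/node<N>/meminfo layouts, and the function name get_meminfo_sketch is illustrative rather than the helper the test actually calls.

#!/usr/bin/env bash
shopt -s extglob                        # required for the +([0-9]) prefix strip below

# Return the value of one meminfo field, system-wide or for a single NUMA node.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # Per-node statistics live under /sys and prefix every line with "Node <N> ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # drop the per-node prefix, if present
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Example (values as reported by this runner's log):
#   get_meminfo_sketch HugePages_Total      -> 1024
#   get_meminfo_sketch HugePages_Surp 0     -> surplus 2 MiB pages on node 0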
00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.727 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.728 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 
-- # return 0 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19989516 kB' 'MemUsed: 12887424 kB' 'SwapCached: 0 kB' 'Active: 6483756 kB' 'Inactive: 3324284 kB' 'Active(anon): 6224744 kB' 'Inactive(anon): 0 kB' 'Active(file): 259012 kB' 'Inactive(file): 3324284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9505088 kB' 'Mapped: 99560 kB' 'AnonPages: 306076 kB' 'Shmem: 5921792 kB' 'KernelStack: 6120 kB' 'PageTables: 3628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117932 kB' 'Slab: 326060 kB' 'SReclaimable: 117932 kB' 'SUnreclaim: 208128 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.729 18:34:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.729 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.730 18:34:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.730 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.731 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.731 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.731 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.731 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.731 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.731 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.731 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.731 18:34:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.731 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.731 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.731 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:18.731 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:18.731 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:18.731 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.731 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:18.731 18:34:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:18.731 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:18.731 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:18.731 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:18.731 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:18.731 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:18.731 node0=1024 expecting 1024 00:04:18.731 18:34:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:18.731 00:04:18.731 real 0m2.321s 00:04:18.731 user 0m0.597s 00:04:18.731 sys 0m0.736s 00:04:18.731 18:34:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:18.731 18:34:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:18.731 ************************************ 00:04:18.731 END TEST default_setup 00:04:18.731 ************************************ 00:04:18.731 18:34:28 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:18.731 18:34:28 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:18.731 18:34:28 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:18.731 18:34:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:18.731 ************************************ 00:04:18.731 START TEST per_node_1G_alloc 00:04:18.731 ************************************ 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( 
size >= default_hugepages )) 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.731 18:34:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:19.697 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:19.697 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:19.697 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:19.697 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:19.697 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:19.697 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:19.697 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:19.697 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:19.697 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:19.961 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:19.961 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:19.961 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:19.961 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:19.961 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:19.961 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:19.961 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:19.961 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:19.961 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:19.961 18:34:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:19.961 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:19.961 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:19.961 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:19.961 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:19.961 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:19.961 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:19.961 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:19.961 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:19.961 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:19.961 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:19.961 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.961 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.961 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.961 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.961 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.961 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.961 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.961 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.961 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43211804 kB' 'MemAvailable: 46726368 kB' 'Buffers: 2704 kB' 'Cached: 12838864 kB' 'SwapCached: 0 kB' 'Active: 9843260 kB' 'Inactive: 3508168 kB' 'Active(anon): 9447680 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512912 kB' 'Mapped: 192704 kB' 'Shmem: 8937820 kB' 'KReclaimable: 209308 kB' 'Slab: 599516 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390208 kB' 'KernelStack: 12912 kB' 'PageTables: 8616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10629616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197208 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.962 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
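The trace above is setup/common.sh walking /proc/meminfo line by line on behalf of get_meminfo: every key that is not AnonHugePages hits continue, the matching row echoes its value (0) and return 0 hands it back, and hugepages.sh stores it as anon=0 before repeating the same lookup for HugePages_Surp. Below is a minimal stand-alone sketch of that lookup pattern in the same bash style; the helper is illustrative only (the argument handling and the sed-based "Node N" strip are simplifications, not the exact SPDK code):

get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node argument, prefer the per-NUMA-node meminfo exposed under /sys;
        # with an empty node (as in this trace) the check fails and /proc/meminfo is used.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Per-node lines carry a "Node N " prefix; strip it so keys match the /proc/meminfo
        # names, then split each row on ': ' exactly as the traced `IFS=': ' read -r var val _` does.
        while IFS=': ' read -r var val _; do
                [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < <(sed 's/^Node [0-9]* *//' "$mem_f")
        return 1
}

get_meminfo AnonHugePages    # prints 0 on this runner, matching anon=0 above
get_meminfo HugePages_Total  # prints 1024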
00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43216944 kB' 'MemAvailable: 46731508 kB' 'Buffers: 2704 kB' 'Cached: 12838868 kB' 'SwapCached: 0 kB' 'Active: 9842872 kB' 'Inactive: 3508168 kB' 'Active(anon): 9447292 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512968 kB' 'Mapped: 192704 kB' 'Shmem: 8937824 kB' 'KReclaimable: 209308 kB' 'Slab: 599516 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390208 kB' 'KernelStack: 12896 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10629632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197224 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 
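The snapshot printed just above already contains everything the helper goes on to extract: HugePages_Total: 1024 and HugePages_Free: 1024 at Hugepagesize: 2048 kB, i.e. 1024 x 2048 kB = 2097152 kB (2 GiB), which matches the Hugetlb: 2097152 kB line, while HugePages_Rsvd, HugePages_Surp and AnonHugePages are all 0. The same arithmetic can be cross-checked on any host with the standard /proc/meminfo keys (a small illustrative snippet, not part of the test scripts):

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
size_kb=$(awk '/^Hugepagesize:/  {print $2}' /proc/meminfo)
hugetlb_kb=$(awk '/^Hugetlb:/    {print $2}' /proc/meminfo)
echo "pool: ${total} pages x ${size_kb} kB = $(( total * size_kb )) kB (kernel reports Hugetlb: ${hugetlb_kb} kB)"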
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.963 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.964 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43217808 kB' 'MemAvailable: 46732372 kB' 'Buffers: 2704 kB' 'Cached: 12838872 kB' 'SwapCached: 0 kB' 'Active: 9842668 kB' 'Inactive: 3508168 kB' 'Active(anon): 9447088 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512692 kB' 'Mapped: 192688 kB' 'Shmem: 8937828 kB' 'KReclaimable: 209308 kB' 'Slab: 599596 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390288 kB' 'KernelStack: 12944 kB' 'PageTables: 8608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10629656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197224 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.965 18:34:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.965 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.966 18:34:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.966 18:34:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.966 18:34:30
[... the same IFS=': ' / read -r var val _ / compare / continue sequence repeats for PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total and HugePages_Free; none of these match the requested field ...]
00:04:19.966 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:19.967 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:19.967 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:19.967 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:19.967 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:19.967 nr_hugepages=1024
00:04:19.967 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:19.967 resv_hugepages=0
00:04:19.967 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:19.967 surplus_hugepages=0
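The trace above is the verbose (set -x) expansion of a helper that scans a meminfo file field by field until it reaches the one requested (HugePages_Rsvd here, value 0). A minimal sketch of that parsing logic, reconstructed only from the trace and not the verbatim SPDK test/setup/common.sh (get_meminfo_sketch is a hypothetical name):

    shopt -s extglob   # needed for the +([0-9]) pattern, as in the trace

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node argument, prefer the per-node copy when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node <N> "
        local line var val
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # keep scanning until the requested field
            echo "$val"                        # e.g. 0 for HugePages_Rsvd above
            return 0
        done
        return 1
    }

    # e.g. get_meminfo_sketch HugePages_Rsvd      -> 0    (per the system dump below)
    #      get_meminfo_sketch HugePages_Total 0   -> 512  (per node0's meminfo below)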
00:04:19.967 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:19.967 anon_hugepages=0
00:04:19.967 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:19.967 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:19.967 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... get_meminfo prologue (common.sh@17-31): get=HugePages_Total, node unset, mem_f=/proc/meminfo, mapfile -t mem, "Node <N>" prefixes stripped, per-line IFS=': ' read -r var val _ ...]
00:04:19.967 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43217808 kB' 'MemAvailable: 46732372 kB' 'Buffers: 2704 kB' 'Cached: 12838872 kB' 'SwapCached: 0 kB' 'Active: 9842856 kB' 'Inactive: 3508168 kB' 'Active(anon): 9447276 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512924 kB' 'Mapped: 192688 kB' 'Shmem: 8937828 kB' 'KReclaimable: 209308 kB' 'Slab: 599596 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390288 kB' 'KernelStack: 12976 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10629680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197224 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB'
[... the HugePages_Total scan walks every field of that dump from MemTotal through Unaccepted, continuing on each non-match ...]
00:04:19.968 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:19.968 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:19.969 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:19.969 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
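The arithmetic being traced here is the pool-consistency check from hugepages.sh: the kernel-reported HugePages_Total (1024) must equal the requested nr_hugepages plus any surplus and reserved pages (both 0 above). A rough sketch of that check, using the hypothetical get_meminfo_sketch helper from earlier and variable names that merely mirror the trace:

    nr_hugepages=1024                                # the allocation requested by the test
    resv=$(get_meminfo_sketch HugePages_Rsvd)        # 0 in the trace above
    surp=$(get_meminfo_sketch HugePages_Surp)        # 0 in the dump above
    total=$(get_meminfo_sketch HugePages_Total)      # 1024 in the dump above

    # Every page of the pool must be accounted for as requested, surplus, or reserved.
    (( total == nr_hugepages + surp + resv )) && echo "hugepage pool consistent"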
00:04:19.969 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:19.969 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
[... hugepages.sh@29-30: the loop over /sys/devices/system/node/node+([0-9]) sets nodes_sys[0]=512 and nodes_sys[1]=512 ...]
00:04:19.969 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:19.969 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:19.969 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:19.969 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:19.969 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[... get_meminfo prologue (common.sh@17-31): get=HugePages_Surp, node=0, mem_f=/sys/devices/system/node/node0/meminfo, mapfile -t mem, "Node <N>" prefixes stripped, per-line IFS=': ' read -r var val _ ...]
00:04:19.969 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21052056 kB' 'MemUsed: 11824884 kB' 'SwapCached: 0 kB' 'Active: 6483976 kB' 'Inactive: 3324284 kB' 'Active(anon): 6224964 kB' 'Inactive(anon): 0 kB' 'Active(file): 259012 kB' 'Inactive(file): 3324284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9505088 kB' 'Mapped: 99420 kB' 'AnonPages: 306396 kB' 'Shmem: 5921792 kB' 'KernelStack: 6136 kB' 'PageTables: 3608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117932 kB' 'Slab: 326068 kB' 'SReclaimable: 117932 kB' 'SUnreclaim: 208136 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... the HugePages_Surp scan walks every node0 field from MemTotal through HugePages_Free, continuing on each non-match ...]
00:04:20.230 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:20.230 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:20.230 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:20.230 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
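Per-node counters like the ones just read for node 0 come straight from sysfs, where each NUMA node exposes its own meminfo. A hedged spot-check (field names and the 512/512/0 values are taken from the node0 dump above; the exact column spacing of the kernel output may differ):

    grep HugePages /sys/devices/system/node/node0/meminfo
    # Node 0 HugePages_Total:   512
    # Node 0 HugePages_Free:    512
    # Node 0 HugePages_Surp:      0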
00:04:20.230 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:20.230 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:20.230 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
[... get_meminfo prologue (common.sh@17-31): get=HugePages_Surp, node=1, mem_f=/sys/devices/system/node/node1/meminfo, mapfile -t mem, "Node <N>" prefixes stripped, per-line IFS=': ' read -r var val _ ...]
00:04:20.230 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664780 kB' 'MemFree: 22166072 kB' 'MemUsed: 5498708 kB' 'SwapCached: 0 kB' 'Active: 3358980 kB' 'Inactive: 183884 kB' 'Active(anon): 3222412 kB' 'Inactive(anon): 0 kB' 'Active(file): 136568 kB' 'Inactive(file): 183884 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3336532 kB' 'Mapped: 93268 kB' 'AnonPages: 206536 kB' 'Shmem: 3016080 kB' 'KernelStack: 6840 kB' 'PageTables: 5100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91376 kB' 'Slab: 273528 kB' 'SReclaimable: 91376 kB' 'SUnreclaim: 182152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... the HugePages_Surp scan over the node1 fields proceeds: MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped and Unaccepted, each followed by continue ...] 00:04:20.231 18:34:30
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.231 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.231 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.231 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:20.231 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.231 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.231 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.231 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:20.231 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.231 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.231 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.231 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.231 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:20.231 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:20.231 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:20.231 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:20.231 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:20.231 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:20.231 node0=512 expecting 512 00:04:20.232 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:20.232 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:20.232 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:20.232 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:20.232 node1=512 expecting 512 00:04:20.232 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:20.232 00:04:20.232 real 0m1.418s 00:04:20.232 user 0m0.579s 00:04:20.232 sys 0m0.804s 00:04:20.232 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:20.232 18:34:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:20.232 ************************************ 00:04:20.232 END TEST per_node_1G_alloc 00:04:20.232 ************************************ 00:04:20.232 18:34:30 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:20.232 18:34:30 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:20.232 18:34:30 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:20.232 18:34:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:20.232 ************************************ 00:04:20.232 START TEST even_2G_alloc 
00:04:20.232 ************************************ 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.232 18:34:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:21.611 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:21.611 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:21.611 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:21.611 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:21.611 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:21.611 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:21.611 0000:00:04.2 (8086 
0e22): Already using the vfio-pci driver 00:04:21.611 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:21.611 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:21.611 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:21.611 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:21.611 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:21.611 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:21.611 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:21.611 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:21.611 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:21.611 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43209868 kB' 'MemAvailable: 46724432 kB' 'Buffers: 2704 kB' 'Cached: 12839004 kB' 'SwapCached: 0 kB' 'Active: 9843860 kB' 'Inactive: 3508168 kB' 'Active(anon): 9448280 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513948 kB' 'Mapped: 192720 kB' 'Shmem: 8937960 kB' 'KReclaimable: 209308 kB' 'Slab: 599492 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390184 kB' 'KernelStack: 12928 kB' 'PageTables: 
8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10629692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197320 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.611 18:34:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.611 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.612 18:34:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43214120 kB' 'MemAvailable: 46728684 kB' 'Buffers: 2704 kB' 'Cached: 12839012 kB' 'SwapCached: 0 kB' 'Active: 9843036 kB' 'Inactive: 3508168 kB' 'Active(anon): 9447456 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512664 kB' 'Mapped: 192788 kB' 'Shmem: 8937968 kB' 'KReclaimable: 209308 kB' 'Slab: 599524 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390216 kB' 'KernelStack: 12880 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10629712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197272 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.612 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.613 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43217756 kB' 'MemAvailable: 46732320 kB' 'Buffers: 2704 kB' 'Cached: 12839028 kB' 'SwapCached: 0 kB' 'Active: 9843600 kB' 'Inactive: 3508168 kB' 'Active(anon): 9448020 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513196 kB' 'Mapped: 192712 kB' 'Shmem: 8937984 kB' 'KReclaimable: 209308 kB' 'Slab: 599504 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390196 kB' 'KernelStack: 12976 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10630104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197256 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.614 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
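For readers following the trace, the loop being exercised above boils down to the following parsing pattern; this is a hedged sketch reconstructed from the traced commands (IFS=': ', read -r var val _, the key comparison, continue), not the literal setup/common.sh source, and get_meminfo_sketch is an illustrative name:

get_meminfo_sketch() {
    # Split each /proc/meminfo line on ': ', skip keys that do not match,
    # and print the value of the requested key (a trailing "kB" lands in _).
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

Called as get_meminfo_sketch HugePages_Rsvd, it would print 0 on this machine, which is the "echo 0" the trace reaches a few records further down.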
00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 
18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.615 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:21.616 nr_hugepages=1024 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:21.616 resv_hugepages=0 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:21.616 surplus_hugepages=0 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:21.616 anon_hugepages=0 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43217912 
kB' 'MemAvailable: 46732476 kB' 'Buffers: 2704 kB' 'Cached: 12839048 kB' 'SwapCached: 0 kB' 'Active: 9843008 kB' 'Inactive: 3508168 kB' 'Active(anon): 9447428 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512596 kB' 'Mapped: 192712 kB' 'Shmem: 8938004 kB' 'KReclaimable: 209308 kB' 'Slab: 599496 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390188 kB' 'KernelStack: 12976 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10630124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197256 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.616 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
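Each of these lookups is opened by the common.sh@22-@29 records that precede the big printf: the source file defaults to /proc/meminfo, switches to the per-node file when a node number is supplied (as in the node0/node1 lookups further down), is read with mapfile, and has any leading "Node N " prefix stripped so the same matching loop works for both forms. A minimal sketch of that setup, with illustrative variable values taken from this run:

shopt -s extglob                               # needed for the +([0-9]) pattern
node=0 get=HugePages_Surp
mem_f=/proc/meminfo
[[ -e /sys/devices/system/node/node$node/meminfo ]] && \
    mem_f=/sys/devices/system/node/node$node/meminfo
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")               # "Node 0 MemTotal: ..." -> "MemTotal: ..."
for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] && { echo "$val"; break; }
done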
00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.617 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
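The records that follow close out this lookup (echo 1024) and then run the even-split bookkeeping that gives this test its name: the 1024-page pool must equal nr_hugepages + surplus + reserved, get_nodes reports two NUMA nodes with 512 pages each, and the per-node loop adds the reservation and each node's HugePages_Surp (both 0 in this run) to the expectation. A hedged sketch of that arithmetic, using the values visible in the trace:

nr_hugepages=1024 surp=0 resv=0
(( 1024 == nr_hugepages + surp + resv ))       # total pool matches the request

nodes_sys=([0]=512 [1]=512)                    # get_nodes: 512 x 2MB pages per node
no_nodes=2
for node in 0 1; do
    (( nodes_test[node] += resv ))             # add the global reservation (0 here)
    (( nodes_test[node] += 0 ))                # add per-node HugePages_Surp (0 for node0 and node1)
done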
00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21057556 kB' 'MemUsed: 11819384 kB' 'SwapCached: 0 kB' 'Active: 6485236 kB' 'Inactive: 3324284 kB' 'Active(anon): 6226224 kB' 'Inactive(anon): 0 kB' 'Active(file): 259012 kB' 'Inactive(file): 3324284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9505168 kB' 'Mapped: 99440 kB' 'AnonPages: 307544 kB' 'Shmem: 5921872 kB' 'KernelStack: 6216 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117932 kB' 'Slab: 325952 kB' 'SReclaimable: 117932 kB' 'SUnreclaim: 208020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.618 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.619 18:34:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664780 kB' 'MemFree: 22161892 kB' 'MemUsed: 5502888 kB' 'SwapCached: 0 kB' 'Active: 3358460 kB' 'Inactive: 183884 kB' 'Active(anon): 3221892 kB' 'Inactive(anon): 0 kB' 'Active(file): 136568 kB' 'Inactive(file): 183884 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3336612 kB' 'Mapped: 93272 kB' 'AnonPages: 205840 kB' 'Shmem: 3016160 kB' 'KernelStack: 6792 kB' 'PageTables: 4916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91376 kB' 'Slab: 273544 kB' 'SReclaimable: 91376 kB' 'SUnreclaim: 182168 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:21.619 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.620 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:21.621 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:04:21.621 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:21.621 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:21.621 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:21.621 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:04:21.621 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:21.621 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:21.621 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:21.621 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:21.621 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:21.621 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:21.621 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:21.621 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:21.621 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:21.621 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:21.621 node0=512 expecting 512
00:04:21.621 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:21.621 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:21.621 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:21.621 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:21.621 node1=512 expecting 512
00:04:21.621 18:34:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:21.621
00:04:21.621 real 0m1.484s
00:04:21.621 user 0m0.637s
00:04:21.621 sys 0m0.815s
00:04:21.621 18:34:31 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:21.621 18:34:31 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:21.621 ************************************
00:04:21.621 END TEST even_2G_alloc
00:04:21.621 ************************************
00:04:21.621 18:34:31 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:21.621 18:34:31 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:21.621 18:34:31 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:21.621 18:34:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:21.621 ************************************
00:04:21.621 START TEST odd_alloc
00:04:21.621 ************************************
00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc
00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:21.621 18:34:31
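Every lookup traced above runs through the same setup/common.sh get_meminfo helper: it picks /proc/meminfo or, for a per-node query, /sys/devices/system/node/node<N>/meminfo, strips the leading "Node <N> " prefix, then walks the "key: value" pairs until it reaches the requested field (HugePages_Surp here) and echoes its value. A condensed sketch of that lookup, reconstructed from the xtrace rather than copied from the repository; the _sketch suffix and the simplified line handling are illustrative:

# Condensed sketch of setup/common.sh's get_meminfo as reconstructed from the
# xtrace above; the _sketch name and simplified parsing are illustrative.
get_meminfo_sketch() {
    local get=$1 node=${2:-}               # e.g. get_meminfo_sketch HugePages_Surp 1
    local mem_f=/proc/meminfo
    # Per-node queries read that node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _rest
    while IFS= read -r line; do
        line=${line#"Node $node "}          # node files prefix every line with "Node <N> "
        IFS=': ' read -r var val _rest <<< "$line"
        if [[ $var == "$get" ]]; then       # first matching key wins; value echoed without its unit
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

Against the node1 snapshot printed above, a HugePages_Free lookup of this kind returns 512, and the HugePages_Surp lookups returned 0 for both nodes, so the planned counts stayed at 512 apiece and both "expecting 512" checks passed.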
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.621 18:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:22.997 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:22.997 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:22.997 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:22.997 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:22.997 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:22.997 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:22.997 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:22.997 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:22.997 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:22.997 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:22.997 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:22.997 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:22.997 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:22.997 0000:80:04.3 
(8086 0e23): Already using the vfio-pci driver 00:04:22.997 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:22.997 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:22.997 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43251464 kB' 'MemAvailable: 46766028 kB' 'Buffers: 2704 kB' 'Cached: 12839136 kB' 'SwapCached: 0 kB' 'Active: 9837508 kB' 'Inactive: 3508168 kB' 'Active(anon): 9441928 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507016 kB' 'Mapped: 191872 kB' 'Shmem: 8938092 kB' 'KReclaimable: 209308 kB' 'Slab: 599448 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390140 kB' 'KernelStack: 12816 kB' 'PageTables: 7928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 10600916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197224 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2287196 kB' 
'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.997 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43254224 kB' 'MemAvailable: 46768788 kB' 'Buffers: 2704 kB' 'Cached: 12839136 kB' 'SwapCached: 0 kB' 'Active: 9838452 kB' 'Inactive: 3508168 kB' 'Active(anon): 9442872 kB' 'Inactive(anon): 0 
kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507984 kB' 'Mapped: 191872 kB' 'Shmem: 8938092 kB' 'KReclaimable: 209308 kB' 'Slab: 599448 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390140 kB' 'KernelStack: 13024 kB' 'PageTables: 8428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 10602300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197336 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.998 
18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.998 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.999 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 
18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43251792 kB' 'MemAvailable: 46766356 kB' 'Buffers: 2704 kB' 'Cached: 12839144 kB' 'SwapCached: 0 kB' 'Active: 9838308 kB' 'Inactive: 3508168 kB' 'Active(anon): 9442728 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507788 kB' 'Mapped: 191860 kB' 'Shmem: 8938100 kB' 'KReclaimable: 209308 kB' 'Slab: 599432 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390124 kB' 'KernelStack: 13408 kB' 'PageTables: 9856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 10602320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197400 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.000 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 
18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.001 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:23.002 nr_hugepages=1025 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:23.002 resv_hugepages=0 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:23.002 surplus_hugepages=0 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:23.002 anon_hugepages=0 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.002 18:34:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43251324 kB' 'MemAvailable: 46765888 kB' 'Buffers: 2704 kB' 'Cached: 12839180 kB' 'SwapCached: 0 kB' 'Active: 9837928 kB' 'Inactive: 3508168 kB' 'Active(anon): 9442348 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507160 kB' 'Mapped: 191920 kB' 'Shmem: 8938136 kB' 'KReclaimable: 209308 kB' 'Slab: 599476 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390168 kB' 'KernelStack: 13008 kB' 'PageTables: 9208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 10600976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197304 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.002 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.003 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21058964 kB' 'MemUsed: 11817976 kB' 'SwapCached: 0 kB' 'Active: 6482864 kB' 'Inactive: 3324284 kB' 'Active(anon): 6223852 kB' 'Inactive(anon): 0 kB' 'Active(file): 259012 kB' 'Inactive(file): 3324284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9505292 kB' 'Mapped: 98944 kB' 'AnonPages: 305048 kB' 'Shmem: 5921996 kB' 'KernelStack: 6168 kB' 'PageTables: 3560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117932 kB' 'Slab: 326156 kB' 'SReclaimable: 117932 kB' 'SUnreclaim: 208224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
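The long key-by-key loop running through these entries is setup/common.sh's get_meminfo scanning a per-node meminfo snapshot for a single field (HugePages_Surp here). A minimal stand-alone sketch of the same parsing pattern, assuming a hypothetical helper name meminfo_value (the real helper carries the xtrace plumbing visible in the trace):

shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node snapshots live under /sys and prefix every key with "Node N "
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")              # drop the "Node 0 " / "Node 1 " prefix

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"    # split "Key:   value kB"
        if [[ $var == "$get" ]]; then
            echo "$val"                           # numeric value only
            return 0
        fi
    done
    return 1
}

meminfo_value HugePages_Surp 0    # prints the node-0 surplus count (0 in the run above)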
00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.004 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 18:34:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664780 kB' 'MemFree: 22191592 kB' 'MemUsed: 5473188 kB' 'SwapCached: 0 kB' 'Active: 3354892 kB' 'Inactive: 183884 kB' 'Active(anon): 3218324 kB' 'Inactive(anon): 0 kB' 'Active(file): 136568 kB' 'Inactive(file): 183884 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3336612 kB' 'Mapped: 92976 kB' 'AnonPages: 202164 kB' 'Shmem: 3016160 kB' 'KernelStack: 6936 kB' 'PageTables: 5456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91376 kB' 'Slab: 273320 kB' 'SReclaimable: 91376 kB' 'SUnreclaim: 181944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 18:34:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
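Node 1's snapshot being walked here carries HugePages_Total: 513 (see the printf a few entries above), while node 0 reported 512: the odd 1025-page request cannot split evenly across two nodes, so one node takes the extra page. The order-insensitive check that odd_alloc applies a few entries below (hugepages.sh@127-130) keys two indexed arrays by the counts themselves, so either node may hold the 513. A sketch of that trick, with shortened, hypothetical variable names:

declare -a want have
want[512]=1; want[513]=1     # counts the test configured: 512 + 513 = 1025 pages
have[513]=1; have[512]=1     # counts read back per node, in whatever order
# Indexed-array subscripts list in ascending order, so both expand to "512 513"
[[ ${!want[*]} == "${!have[*]}" ]] && echo "odd_alloc: per-node counts match"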
00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.005 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- 
# echo 'node0=512 expecting 513' 00:04:23.006 node0=512 expecting 513 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:23.006 node1=513 expecting 512 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:23.006 00:04:23.006 real 0m1.409s 00:04:23.006 user 0m0.594s 00:04:23.006 sys 0m0.781s 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:23.006 18:34:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:23.006 ************************************ 00:04:23.006 END TEST odd_alloc 00:04:23.006 ************************************ 00:04:23.264 18:34:33 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:23.264 18:34:33 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:23.264 18:34:33 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:23.264 18:34:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:23.264 ************************************ 00:04:23.264 START TEST custom_alloc 00:04:23.264 ************************************ 00:04:23.264 18:34:33 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:04:23.264 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:23.264 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:23.264 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:23.264 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:23.264 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:23.264 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:23.264 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:23.264 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:23.264 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:23.264 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:23.264 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:23.264 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:23.264 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:23.264 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:23.264 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:23.264 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:23.264 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:23.264 18:34:33 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:23.264 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:23.264 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:23.264 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:23.264 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 
-- # for node in "${!nodes_hp[@]}" 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.265 18:34:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:24.197 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:24.197 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:24.197 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:24.197 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:24.197 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:24.197 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:24.197 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:24.197 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:24.197 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:24.197 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:24.197 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:24.197 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:24.197 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:24.197 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:24.197 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:24.197 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:24.197 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 42188584 kB' 'MemAvailable: 45703148 kB' 'Buffers: 2704 kB' 'Cached: 12839268 kB' 'SwapCached: 0 kB' 'Active: 9840676 kB' 'Inactive: 3508168 kB' 'Active(anon): 9445096 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 510044 kB' 'Mapped: 192288 kB' 'Shmem: 8938224 kB' 'KReclaimable: 209308 kB' 'Slab: 599580 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390272 kB' 'KernelStack: 12832 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 10604864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197144 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.461 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
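The custom_alloc parameterization traced a little further up reduces to simple arithmetic against the 2048 kB Hugepagesize reported in the snapshot; a worked restatement (not the script itself, which derives it via get_test_nr_hugepages):

default_hugepages=2048                      # kB, the Hugepagesize seen in the dump above
echo $(( 1048576 / default_hugepages ))     # 512  -> nodes_hp[0] (1 GiB on node 0)
echo $(( 2097152 / default_hugepages ))     # 1024 -> nodes_hp[1] (2 GiB on node 1)
echo $(( 512 + 1024 ))                      # 1536 total, matching HugePages_Total above
# HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' is then handed to scripts/setup.sh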
00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.462 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 42187604 kB' 'MemAvailable: 45702168 kB' 'Buffers: 2704 kB' 'Cached: 12839268 kB' 'SwapCached: 0 kB' 'Active: 9842708 kB' 'Inactive: 3508168 kB' 'Active(anon): 9447128 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512144 kB' 'Mapped: 192704 kB' 'Shmem: 8938224 kB' 'KReclaimable: 209308 kB' 'Slab: 599580 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390272 kB' 'KernelStack: 12864 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 10606476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197116 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.463 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
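The loop being traced here is the per-key scan inside get_meminfo (setup/common.sh): the harness snapshots /proc/meminfo (or a per-node copy under /sys/devices/system/node), strips any leading "Node N " prefix, then walks the keys with IFS=': ' until the requested key matches and its value is echoed. A minimal standalone sketch of that lookup follows; it assumes bash with extglob, and the function name meminfo_lookup is hypothetical, not SPDK's own:

    #!/usr/bin/env bash
    shopt -s extglob

    # Print the /proc/meminfo value for a key, optionally for one NUMA node.
    meminfo_lookup() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        # Per-node statistics live in sysfs; otherwise use the global file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
        local var val _
        # "HugePages_Surp: 0" splits into var/val; a trailing "kB" lands in _.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    surp=$(meminfo_lookup HugePages_Surp)   # 0 on this machine, matching the trace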
00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.464 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 42187176 kB' 'MemAvailable: 45701740 kB' 'Buffers: 2704 kB' 'Cached: 12839292 kB' 'SwapCached: 0 kB' 'Active: 9842028 kB' 'Inactive: 3508168 kB' 'Active(anon): 9446448 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511428 kB' 'Mapped: 192760 kB' 'Shmem: 8938248 kB' 'KReclaimable: 209308 kB' 'Slab: 599612 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390304 kB' 'KernelStack: 12880 kB' 'PageTables: 7928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 10606500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
197100 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.465 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.466 18:34:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@100 -- # resv=0 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:24.466 nr_hugepages=1536 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.466 resv_hugepages=0 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.466 surplus_hugepages=0 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.466 anon_hugepages=0 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 42186924 kB' 'MemAvailable: 45701488 kB' 'Buffers: 2704 kB' 'Cached: 12839312 kB' 'SwapCached: 0 kB' 'Active: 9836516 kB' 'Inactive: 3508168 kB' 'Active(anon): 9440936 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505900 kB' 'Mapped: 192324 kB' 'Shmem: 8938268 kB' 'KReclaimable: 209308 kB' 'Slab: 599612 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390304 kB' 'KernelStack: 12880 kB' 'PageTables: 7932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 10600400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197112 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
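Right after the reserved-page lookup returns, the harness echoes the totals seen above (nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and sanity-checks them before continuing with the custom allocation. A hedged sketch of that bookkeeping, reusing the illustrative meminfo_lookup helper from the earlier sketch and the values printed in this log:

    # Values the trace extracted for the global (node-less) case.
    anon=$(meminfo_lookup AnonHugePages)    # 0
    surp=$(meminfo_lookup HugePages_Surp)   # 0
    resv=$(meminfo_lookup HugePages_Rsvd)   # 0
    nr_hugepages=1536                        # total requested by the custom_alloc test

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # The kernel's HugePages_Total must account for the requested pages plus
    # any surplus and reserved pages before the test trusts the allocation.
    if (( $(meminfo_lookup HugePages_Total) == nr_hugepages + surp + resv )); then
        echo "hugepage accounting is consistent"
    fi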
00:04:24.466 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.467 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.468 18:34:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21050188 kB' 'MemUsed: 11826752 kB' 'SwapCached: 0 kB' 'Active: 6482172 kB' 'Inactive: 3324284 kB' 'Active(anon): 6223160 kB' 'Inactive(anon): 0 kB' 'Active(file): 259012 kB' 'Inactive(file): 3324284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9505400 kB' 'Mapped: 98896 kB' 'AnonPages: 304296 kB' 'Shmem: 5922104 kB' 'KernelStack: 6184 kB' 'PageTables: 3500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117932 kB' 'Slab: 326152 kB' 'SReclaimable: 117932 kB' 'SUnreclaim: 208220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.468 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.469 18:34:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.469 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664780 kB' 'MemFree: 21136484 kB' 'MemUsed: 6528296 kB' 'SwapCached: 0 kB' 'Active: 3354292 kB' 'Inactive: 183884 kB' 'Active(anon): 3217724 kB' 'Inactive(anon): 0 kB' 'Active(file): 136568 kB' 'Inactive(file): 183884 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3336636 kB' 'Mapped: 92948 kB' 'AnonPages: 201608 kB' 'Shmem: 3016184 kB' 'KernelStack: 6680 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91376 kB' 'Slab: 273460 kB' 'SReclaimable: 91376 kB' 'SUnreclaim: 182084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
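The trace above (and the node1 scan that follows) is setup/common.sh's get_meminfo walking a meminfo file one field at a time. A minimal sketch of that helper, reconstructed only from the commands visible in this log rather than quoted from the SPDK source:

    shopt -s extglob    # the "Node +([0-9]) " strip below needs extended globs

    get_meminfo() {
        local get=$1          # field name to look up, e.g. HugePages_Surp
        local node=$2         # optional NUMA node id; empty means system-wide
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # prefer the per-node meminfo when the node file exists
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node files prefix every line with "Node <n> "; drop that prefix
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"       # e.g. 0 for HugePages_Surp, 512 for node0 HugePages_Total
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

The per-node files under /sys/devices/system/node/nodeN/meminfo prefix every line with "Node N ", which is why the prefix strip runs before the IFS=': ' scan that produces the long run of "continue" entries in this log.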
00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.470 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.471 18:34:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:24.471 node0=512 expecting 512 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:24.471 node1=1024 expecting 1024 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:24.471 00:04:24.471 real 0m1.340s 00:04:24.471 user 0m0.570s 00:04:24.471 sys 0m0.733s 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:24.471 18:34:34 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:24.471 ************************************ 00:04:24.471 END TEST custom_alloc 00:04:24.471 ************************************ 00:04:24.471 18:34:34 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:24.471 18:34:34 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:24.471 18:34:34 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:24.471 18:34:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:24.471 ************************************ 00:04:24.471 START TEST no_shrink_alloc 00:04:24.471 ************************************ 00:04:24.471 18:34:34 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:04:24.471 18:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:24.471 18:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:24.471 18:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:24.471 18:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:24.471 18:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:24.471 18:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:24.471 18:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:24.471 18:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:24.471 18:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:24.471 18:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:24.471 18:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.471 18:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:24.471 18:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:24.471 18:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.471 18:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.471 18:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:24.471 18:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:24.471 18:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:24.471 18:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
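At this point custom_alloc has passed: each node's requested count plus reserved/surplus pages matched what the kernel reports (node0=512, node1=1024, joined as 512,1024 on both sides of the final check). A rough sketch of that comparison, pieced together from the trace; the function name is hypothetical, the sysfs path is an assumption (the log only shows the resulting values), and it reuses the get_meminfo sketch above:

    verify_custom_split() {
        local node surp resv=0
        local -a nodes_sys nodes_test sorted_s sorted_t

        # kernel's per-node allocation (512 / 1024 in this run)
        for node in /sys/devices/system/node/node[0-9]*; do
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done

        nodes_test=(512 1024)   # what the test asked for per node
        for node in "${!nodes_test[@]}"; do
            surp=$(get_meminfo HugePages_Surp "$node")   # 0 on both nodes here
            (( nodes_test[node] += resv + surp ))
            sorted_t[nodes_test[node]]=1                 # index by count -> ascending keys
            sorted_s[nodes_sys[node]]=1
            echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        done

        # "512,1024" == "512,1024" in this run
        local IFS=,
        [[ "${!sorted_s[*]}" == "${!sorted_t[*]}" ]]
    }

Keying the sorted_* indexed arrays by the page counts is what yields the ascending 512,1024 strings compared at hugepages.sh@130 above.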
00:04:24.471 18:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:24.471 18:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.471 18:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:25.849 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:25.849 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:25.849 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:25.849 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:25.849 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:25.849 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:25.849 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:25.849 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:25.849 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:25.849 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:25.849 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:25.849 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:25.849 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:25.849 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:25.849 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:25.849 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:25.849 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.849 18:34:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43124344 kB' 'MemAvailable: 46638908 kB' 'Buffers: 2704 kB' 'Cached: 12839396 kB' 'SwapCached: 0 kB' 'Active: 9837116 kB' 'Inactive: 3508168 kB' 'Active(anon): 9441536 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506400 kB' 'Mapped: 191876 kB' 'Shmem: 8938352 kB' 'KReclaimable: 209308 kB' 'Slab: 599420 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390112 kB' 'KernelStack: 12928 kB' 'PageTables: 7940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10600592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197208 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.849 18:34:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.849 18:34:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.849 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
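The long runs of "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" entries here are the get_meminfo helper from setup/common.sh walking /proc/meminfo one field at a time: it snapshots the file with mapfile, strips any "Node <n>" prefix (so the same code also works against /sys/devices/system/node/node<n>/meminfo), then splits each line on ': ' and keeps skipping until the requested key matches, at which point it echoes the value and returns 0. A minimal sketch of that logic, reconstructed from this trace rather than copied from the repository (the real helper buffers the whole file with mapfile instead of streaming it, and its argument handling may differ), assuming plain bash:

    get_meminfo() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        # Per-NUMA-node queries read that node's own meminfo file instead.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            # Per-node files prefix every key with "Node <n> "; drop that part.
            [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"   # e.g. "1024" for HugePages_Total, "0" for HugePages_Surp
                return 0
            fi
        done <"$mem_f"
        return 1
    }

On this box the helper reports 0 for AnonHugePages, HugePages_Surp and HugePages_Rsvd and 1024 for HugePages_Total, which is what the later "(( 1024 == nr_hugepages + surp + resv ))" check in setup/hugepages.sh relies on.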
00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43133864 kB' 'MemAvailable: 46648428 kB' 'Buffers: 2704 kB' 'Cached: 12839400 kB' 'SwapCached: 0 kB' 'Active: 9837536 kB' 'Inactive: 3508168 kB' 'Active(anon): 9441956 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506856 kB' 'Mapped: 191856 kB' 'Shmem: 8938356 kB' 'KReclaimable: 209308 kB' 'Slab: 599388 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390080 kB' 'KernelStack: 12928 kB' 'PageTables: 7920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10600244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197160 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 
'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.850 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 
18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 
18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.851 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43134472 kB' 'MemAvailable: 46649036 kB' 'Buffers: 2704 kB' 'Cached: 12839416 kB' 'SwapCached: 0 kB' 'Active: 9836976 kB' 'Inactive: 3508168 kB' 'Active(anon): 9441396 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506232 kB' 'Mapped: 191856 kB' 'Shmem: 8938372 kB' 'KReclaimable: 209308 kB' 'Slab: 599420 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390112 kB' 'KernelStack: 12848 kB' 'PageTables: 7664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10600264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197128 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.852 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.853 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo 
nr_hugepages=1024 00:04:25.854 nr_hugepages=1024 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:25.854 resv_hugepages=0 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:25.854 surplus_hugepages=0 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:25.854 anon_hugepages=0 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43134472 kB' 'MemAvailable: 46649036 kB' 'Buffers: 2704 kB' 'Cached: 12839440 kB' 'SwapCached: 0 kB' 'Active: 9836652 kB' 'Inactive: 3508168 kB' 'Active(anon): 9441072 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505912 kB' 'Mapped: 191856 kB' 'Shmem: 8938396 kB' 'KReclaimable: 209308 kB' 'Slab: 599420 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390112 kB' 'KernelStack: 12880 kB' 'PageTables: 7796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10600292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197144 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- 
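A quick consistency check on the meminfo snapshot printed above (an observation on the numbers, not a step of the script): with Hugepagesize at 2048 kB and HugePages_Total at 1024, the hugepage pool accounts for

    echo $(( 1024 * 2048 )) kB    # 2097152 kB

which matches the Hugetlb: 2097152 kB field, i.e. all hugetlb memory in this run comes from the 1024 reserved 2 MB pages.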
setup/common.sh@32 -- # continue 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.854 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.855 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.855 18:34:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:25.856 
18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19930920 kB' 'MemUsed: 12946020 kB' 'SwapCached: 0 kB' 'Active: 6482688 kB' 'Inactive: 3324284 kB' 'Active(anon): 6223676 kB' 'Inactive(anon): 0 kB' 'Active(file): 259012 kB' 'Inactive(file): 3324284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9505528 kB' 'Mapped: 98908 kB' 'AnonPages: 304632 kB' 'Shmem: 5922232 kB' 'KernelStack: 6200 kB' 'PageTables: 3444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117932 kB' 'Slab: 326148 kB' 'SReclaimable: 117932 kB' 'SUnreclaim: 208216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.856 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 18:34:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 18:34:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:25.857 node0=1024 expecting 1024 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.857 18:34:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:27.230 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:27.230 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:27.230 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:27.230 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:27.230 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:27.230 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:27.230 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:27.230 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:27.230 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:27.230 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:27.230 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:27.230 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:27.230 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:27.230 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:27.230 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:27.230 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:27.230 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:27.230 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:27.230 18:34:37 
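The "node0=1024 expecting 1024" line above is the per-node side of the same check: the script reads HugePages_Total (and HugePages_Surp) from each node's sysfs meminfo and compares it with the count it expects on that node. A rough sketch of that step, reusing the get_meminfo helper sketched earlier and a hypothetical wrapper name:

    check_node_hugepages() {
        local node=$1 expected=$2 total
        # resolves to /sys/devices/system/node/node$node/meminfo
        total=$(get_meminfo HugePages_Total "$node")
        echo "node$node=$total expecting $expected"
        [[ $total == "$expected" ]]
    }

    # e.g. check_node_hugepages 0 1024   # prints "node0=1024 expecting 1024" as in the run above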
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43131092 kB' 'MemAvailable: 46645656 kB' 'Buffers: 2704 kB' 'Cached: 12839512 kB' 'SwapCached: 0 kB' 'Active: 9837552 kB' 'Inactive: 3508168 kB' 'Active(anon): 9441972 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506656 kB' 'Mapped: 191872 kB' 'Shmem: 8938468 kB' 'KReclaimable: 209308 kB' 'Slab: 599544 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390236 kB' 'KernelStack: 12912 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10601036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197240 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:27.230 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.231 18:34:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43134144 kB' 'MemAvailable: 46648708 kB' 'Buffers: 2704 kB' 'Cached: 12839516 kB' 'SwapCached: 0 kB' 'Active: 9837552 kB' 'Inactive: 3508168 kB' 'Active(anon): 9441972 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506704 kB' 'Mapped: 191876 kB' 'Shmem: 8938472 kB' 'KReclaimable: 209308 kB' 'Slab: 599536 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390228 kB' 'KernelStack: 12880 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10601056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197224 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 
18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.231 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 
18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43133764 kB' 'MemAvailable: 46648328 kB' 'Buffers: 2704 kB' 'Cached: 12839532 kB' 'SwapCached: 0 kB' 'Active: 9837140 kB' 
'Inactive: 3508168 kB' 'Active(anon): 9441560 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506276 kB' 'Mapped: 191860 kB' 'Shmem: 8938488 kB' 'KReclaimable: 209308 kB' 'Slab: 599592 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390284 kB' 'KernelStack: 12960 kB' 'PageTables: 7948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10601076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197240 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.232 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:27.233 nr_hugepages=1024 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:27.233 resv_hugepages=0 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:27.233 surplus_hugepages=0 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:27.233 anon_hugepages=0 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43133764 kB' 'MemAvailable: 46648328 kB' 'Buffers: 2704 kB' 
'Cached: 12839556 kB' 'SwapCached: 0 kB' 'Active: 9837180 kB' 'Inactive: 3508168 kB' 'Active(anon): 9441600 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506268 kB' 'Mapped: 191860 kB' 'Shmem: 8938512 kB' 'KReclaimable: 209308 kB' 'Slab: 599592 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390284 kB' 'KernelStack: 12960 kB' 'PageTables: 7948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10601100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197240 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.233 18:34:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.233 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19937788 kB' 'MemUsed: 12939152 kB' 'SwapCached: 0 kB' 'Active: 6482716 kB' 'Inactive: 3324284 kB' 'Active(anon): 6223704 kB' 'Inactive(anon): 0 kB' 'Active(file): 259012 kB' 'Inactive(file): 3324284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9505632 
kB' 'Mapped: 98912 kB' 'AnonPages: 304512 kB' 'Shmem: 5922336 kB' 'KernelStack: 6280 kB' 'PageTables: 3596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117932 kB' 'Slab: 326232 kB' 'SReclaimable: 117932 kB' 'SUnreclaim: 208300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.234 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:27.235 node0=1024 expecting 1024 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:27.235 00:04:27.235 real 0m2.734s 00:04:27.235 user 0m1.149s 00:04:27.235 sys 0m1.514s 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:27.235 18:34:37 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:27.235 ************************************ 00:04:27.235 END TEST no_shrink_alloc 00:04:27.235 ************************************ 00:04:27.235 18:34:37 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:27.235 18:34:37 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:27.235 18:34:37 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:27.235 18:34:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:27.235 18:34:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:27.235 18:34:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:27.235 18:34:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:27.235 18:34:37 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:27.235 18:34:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:27.235 18:34:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:27.235 18:34:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:27.235 18:34:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:27.235 18:34:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:27.235 18:34:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:27.235 00:04:27.235 real 0m11.081s 00:04:27.235 user 0m4.298s 00:04:27.235 sys 0m5.608s 00:04:27.235 18:34:37 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:27.235 18:34:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:27.235 ************************************ 00:04:27.235 END TEST hugepages 00:04:27.235 ************************************ 00:04:27.235 18:34:37 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:27.235 18:34:37 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:27.235 18:34:37 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:27.235 18:34:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:27.235 ************************************ 00:04:27.235 START TEST driver 00:04:27.235 ************************************ 00:04:27.235 18:34:37 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:27.493 * Looking for test storage... 
00:04:27.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:27.493 18:34:37 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:27.493 18:34:37 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:27.493 18:34:37 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:30.019 18:34:39 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:30.019 18:34:39 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:30.019 18:34:39 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:30.019 18:34:39 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:30.019 ************************************ 00:04:30.019 START TEST guess_driver 00:04:30.019 ************************************ 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:30.019 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:30.019 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:30.019 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:30.019 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:30.019 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:30.019 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:30.019 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:30.019 18:34:39 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:30.019 Looking for driver=vfio-pci 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.019 18:34:39 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.954 18:34:40 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.954 18:34:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.954 18:34:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.954 18:34:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.954 18:34:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.954 18:34:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.954 18:34:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.954 18:34:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.954 18:34:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.954 18:34:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.954 18:34:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.954 18:34:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.954 18:34:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.954 18:34:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.954 18:34:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.954 18:34:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.954 18:34:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.884 18:34:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.884 18:34:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.884 18:34:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.884 18:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:31.885 18:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:31.885 18:34:42 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:31.885 18:34:42 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:34.410 00:04:34.410 real 0m4.593s 00:04:34.410 user 0m1.024s 00:04:34.410 sys 0m1.713s 00:04:34.410 18:34:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:34.410 18:34:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:34.410 ************************************ 00:04:34.410 END TEST guess_driver 00:04:34.410 ************************************ 00:04:34.410 00:04:34.410 real 0m6.889s 00:04:34.410 user 0m1.597s 00:04:34.410 sys 0m2.602s 00:04:34.410 18:34:44 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:34.410 
18:34:44 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:34.410 ************************************ 00:04:34.410 END TEST driver 00:04:34.410 ************************************ 00:04:34.410 18:34:44 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:34.410 18:34:44 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:34.410 18:34:44 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:34.410 18:34:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:34.410 ************************************ 00:04:34.410 START TEST devices 00:04:34.410 ************************************ 00:04:34.410 18:34:44 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:34.410 * Looking for test storage... 00:04:34.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:34.410 18:34:44 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:34.410 18:34:44 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:34.410 18:34:44 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:34.410 18:34:44 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.785 18:34:45 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:35.785 18:34:45 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:35.785 18:34:45 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:35.785 18:34:45 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:35.785 18:34:45 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:35.785 18:34:45 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:35.785 18:34:45 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:35.785 18:34:45 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:35.785 18:34:45 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:35.785 18:34:45 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:35.785 18:34:45 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:35.785 18:34:45 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:35.785 18:34:45 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:35.785 18:34:45 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:35.785 18:34:45 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:35.785 18:34:45 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:35.785 18:34:45 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:35.785 18:34:45 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:35.785 18:34:45 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:35.785 18:34:45 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:35.785 18:34:45 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:35.785 18:34:45 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:35.785 No valid GPT data, 
bailing 00:04:35.785 18:34:45 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:35.785 18:34:45 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:35.785 18:34:45 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:35.785 18:34:45 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:35.785 18:34:45 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:35.785 18:34:45 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:35.785 18:34:45 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:35.785 18:34:45 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:35.785 18:34:45 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:35.785 18:34:45 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:35.785 18:34:45 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:35.785 18:34:45 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:35.785 18:34:45 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:35.785 18:34:45 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:35.785 18:34:45 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.785 18:34:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:35.785 ************************************ 00:04:35.785 START TEST nvme_mount 00:04:35.785 ************************************ 00:04:35.785 18:34:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:04:35.785 18:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:35.785 18:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:35.785 18:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.785 18:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.785 18:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:35.785 18:34:45 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:35.785 18:34:45 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:35.785 18:34:45 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:35.785 18:34:45 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:35.785 18:34:45 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:35.785 18:34:45 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:35.785 18:34:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:35.785 18:34:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:35.785 18:34:45 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:35.785 18:34:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:35.785 18:34:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:35.785 18:34:45 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:35.785 18:34:45 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:35.785 18:34:45 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:36.719 Creating new GPT entries in memory. 00:04:36.719 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:36.719 other utilities. 00:04:36.719 18:34:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:36.719 18:34:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:36.719 18:34:46 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:36.719 18:34:46 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:36.719 18:34:46 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:37.651 Creating new GPT entries in memory. 00:04:37.651 The operation has completed successfully. 00:04:37.651 18:34:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:37.651 18:34:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:37.651 18:34:47 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1244625 00:04:37.651 18:34:47 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.651 18:34:47 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:37.651 18:34:47 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.651 18:34:47 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:37.651 18:34:47 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:37.909 18:34:48 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.909 18:34:48 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:37.909 18:34:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:37.909 18:34:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:37.909 18:34:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.909 18:34:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:37.909 18:34:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:37.909 18:34:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:37.909 18:34:48 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:37.909 18:34:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
00:04:37.909 18:34:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.909 18:34:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:37.909 18:34:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:37.909 18:34:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.909 18:34:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.841 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.842 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.842 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.842 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.842 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.842 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.842 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.842 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.842 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.099 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:39.099 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:39.099 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.100 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:39.100 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:39.100 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:39.100 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.100 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.100 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.100 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:39.100 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:39.100 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:39.100 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:39.357 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:39.357 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:39.357 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:39.357 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:39.357 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:39.357 18:34:49 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:39.357 18:34:49 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.357 18:34:49 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:39.357 18:34:49 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:39.357 18:34:49 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.357 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:39.357 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:39.357 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:39.357 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.357 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:39.357 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:39.357 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:39.357 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:39.357 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:39.357 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.357 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:39.357 18:34:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:39.357 18:34:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.357 18:34:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.730 18:34:50 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:04:40.730 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:40.731 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:40.731 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:40.731 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:40.731 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:40.731 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:40.731 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:40.731 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.731 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:40.731 18:34:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:40.731 18:34:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.731 18:34:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.664 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.665 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.665 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.665 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.665 18:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.923 18:34:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:41.923 18:34:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:41.923 18:34:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:41.923 18:34:52 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:41.923 18:34:52 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:41.923 18:34:52 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:41.923 18:34:52 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:41.923 18:34:52 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:41.923 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:41.923 00:04:41.923 real 0m6.161s 00:04:41.923 user 0m1.407s 00:04:41.923 sys 0m2.358s 00:04:41.923 18:34:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:41.923 18:34:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:41.923 ************************************ 00:04:41.923 END TEST nvme_mount 00:04:41.923 ************************************ 
00:04:41.923 18:34:52 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:41.923 18:34:52 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:41.923 18:34:52 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:41.923 18:34:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:41.923 ************************************ 00:04:41.923 START TEST dm_mount 00:04:41.923 ************************************ 00:04:41.923 18:34:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:04:41.923 18:34:52 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:41.923 18:34:52 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:41.923 18:34:52 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:41.923 18:34:52 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:41.923 18:34:52 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:41.923 18:34:52 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:41.923 18:34:52 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:41.923 18:34:52 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:41.923 18:34:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:41.923 18:34:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:41.923 18:34:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:41.923 18:34:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:41.923 18:34:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:41.923 18:34:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:41.923 18:34:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:41.923 18:34:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:41.923 18:34:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:41.923 18:34:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:41.923 18:34:52 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:41.923 18:34:52 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:41.923 18:34:52 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:42.855 Creating new GPT entries in memory. 00:04:42.855 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:42.855 other utilities. 00:04:42.855 18:34:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:42.855 18:34:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.855 18:34:53 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:42.855 18:34:53 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:42.855 18:34:53 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:44.228 Creating new GPT entries in memory. 00:04:44.228 The operation has completed successfully. 
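The dm_mount test that starts here follows the same pattern but layers a device-mapper target over two 1 GiB partitions before formatting. Roughly, and with the dmsetup table shown here being an illustrative linear concatenation rather than the literal table devices.sh builds:

  disk=/dev/nvme0n1
  sgdisk "$disk" --new=1:2048:2099199                # p1, created just above
  sgdisk "$disk" --new=2:2099200:4196351             # p2, created in the next step of the trace
  # join the two partitions into one linear dm device (table values are illustrative)
  dmsetup create nvme_dm_test <<EOF
  0 2097152 linear ${disk}p1 0
  2097152 2097152 linear ${disk}p2 0
  EOF
  mkfs.ext4 -qF /dev/mapper/nvme_dm_test
  mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
  # ... verify as in nvme_mount (holders of dm-0 include both partitions), then clean up:
  umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
  dmsetup remove --force nvme_dm_test
  wipefs --all "${disk}p1"
  wipefs --all "${disk}p2"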
00:04:44.228 18:34:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:44.228 18:34:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:44.228 18:34:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:44.228 18:34:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:44.228 18:34:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:45.161 The operation has completed successfully. 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1246899 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.161 18:34:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:46.094 18:34:56 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.094 18:34:56 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.538 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.539 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.539 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.539 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:47.539 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:47.539 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:47.539 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:47.539 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:47.539 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:47.539 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.539 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:47.539 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:47.539 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:47.539 18:34:57 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:47.539 00:04:47.539 real 0m5.455s 00:04:47.539 user 0m0.940s 00:04:47.539 sys 0m1.420s 00:04:47.539 18:34:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:47.539 18:34:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:47.539 ************************************ 00:04:47.539 END TEST dm_mount 00:04:47.539 ************************************ 00:04:47.539 18:34:57 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:47.539 18:34:57 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:47.539 18:34:57 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.539 18:34:57 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.539 18:34:57 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:47.539 18:34:57 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:47.539 18:34:57 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:47.797 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:47.797 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:47.797 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:47.797 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:47.797 18:34:57 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:47.797 18:34:57 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:47.797 18:34:57 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:47.797 18:34:57 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.797 18:34:57 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:47.797 18:34:57 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:47.797 18:34:57 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:47.797 00:04:47.797 real 0m13.417s 00:04:47.797 user 0m2.941s 00:04:47.797 sys 0m4.738s 00:04:47.797 18:34:57 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:47.797 18:34:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:47.797 ************************************ 00:04:47.797 END TEST devices 00:04:47.797 ************************************ 00:04:47.797 00:04:47.797 real 0m41.760s 00:04:47.797 user 0m12.177s 00:04:47.797 sys 0m18.128s 00:04:47.797 18:34:57 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:47.797 18:34:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:47.797 ************************************ 00:04:47.797 END TEST setup.sh 00:04:47.797 ************************************ 00:04:47.797 18:34:57 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:48.729 Hugepages 00:04:48.729 node hugesize free / total 00:04:48.988 node0 1048576kB 0 / 0 00:04:48.988 node0 2048kB 2048 / 2048 00:04:48.988 node1 1048576kB 0 / 0 00:04:48.988 node1 2048kB 0 / 0 00:04:48.988 00:04:48.988 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:48.988 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:48.988 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:48.988 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:48.988 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:48.988 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:48.988 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:48.988 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:48.988 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:48.988 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:48.988 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:48.988 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:48.988 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:48.988 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:48.988 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:48.988 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:48.988 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:48.988 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:48.988 18:34:59 -- spdk/autotest.sh@130 -- # uname -s 00:04:48.988 18:34:59 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:48.988 18:34:59 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:48.988 18:34:59 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:49.920 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:49.920 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:49.921 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:49.921 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:49.921 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:49.921 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:49.921 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:49.921 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:49.921 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:49.921 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:49.921 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:49.921 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:49.921 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:50.178 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:50.178 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:50.178 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:51.117 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:51.117 18:35:01 -- common/autotest_common.sh@1528 -- # sleep 1 00:04:52.491 18:35:02 -- common/autotest_common.sh@1529 -- # bdfs=() 00:04:52.491 18:35:02 -- common/autotest_common.sh@1529 -- # local bdfs 00:04:52.491 18:35:02 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:04:52.491 18:35:02 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:04:52.491 18:35:02 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:52.491 18:35:02 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:52.491 18:35:02 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:52.491 18:35:02 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:52.491 18:35:02 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:52.491 18:35:02 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:52.491 18:35:02 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:04:52.491 18:35:02 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.425 Waiting for block devices as requested 00:04:53.425 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:53.425 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:53.425 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:53.425 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:53.683 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:53.683 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:53.683 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:53.683 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:53.940 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:53.940 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:53.940 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:53.940 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:54.197 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:54.197 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:54.197 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:54.197 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:54.455 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:54.455 18:35:04 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 
00:04:54.455 18:35:04 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:54.455 18:35:04 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:04:54.455 18:35:04 -- common/autotest_common.sh@1498 -- # grep 0000:88:00.0/nvme/nvme 00:04:54.455 18:35:04 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:54.455 18:35:04 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:54.455 18:35:04 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:54.455 18:35:04 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:04:54.455 18:35:04 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:04:54.455 18:35:04 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:04:54.455 18:35:04 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:04:54.455 18:35:04 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:54.455 18:35:04 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:54.455 18:35:04 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:04:54.455 18:35:04 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:04:54.455 18:35:04 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:04:54.455 18:35:04 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:04:54.455 18:35:04 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:04:54.455 18:35:04 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:04:54.455 18:35:04 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:04:54.455 18:35:04 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:04:54.455 18:35:04 -- common/autotest_common.sh@1553 -- # continue 00:04:54.455 18:35:04 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:54.455 18:35:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:54.455 18:35:04 -- common/autotest_common.sh@10 -- # set +x 00:04:54.455 18:35:04 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:54.455 18:35:04 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:54.455 18:35:04 -- common/autotest_common.sh@10 -- # set +x 00:04:54.455 18:35:04 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:55.828 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:55.828 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:55.828 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:55.828 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:55.828 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:55.828 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:55.828 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:55.828 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:55.828 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:55.828 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:55.828 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:55.828 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:55.828 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:55.828 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:55.828 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:55.828 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:56.762 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:56.762 18:35:07 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:56.762 18:35:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:56.762 18:35:07 -- 
common/autotest_common.sh@10 -- # set +x 00:04:56.762 18:35:07 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:56.762 18:35:07 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:04:56.762 18:35:07 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:04:56.762 18:35:07 -- common/autotest_common.sh@1573 -- # bdfs=() 00:04:56.762 18:35:07 -- common/autotest_common.sh@1573 -- # local bdfs 00:04:56.762 18:35:07 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:04:56.762 18:35:07 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:56.762 18:35:07 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:56.762 18:35:07 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:56.762 18:35:07 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:56.762 18:35:07 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:57.020 18:35:07 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:57.020 18:35:07 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:04:57.020 18:35:07 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:57.020 18:35:07 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:57.020 18:35:07 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:04:57.020 18:35:07 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:57.020 18:35:07 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:04:57.020 18:35:07 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:88:00.0 00:04:57.020 18:35:07 -- common/autotest_common.sh@1588 -- # [[ -z 0000:88:00.0 ]] 00:04:57.020 18:35:07 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=1252065 00:04:57.020 18:35:07 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.020 18:35:07 -- common/autotest_common.sh@1594 -- # waitforlisten 1252065 00:04:57.020 18:35:07 -- common/autotest_common.sh@827 -- # '[' -z 1252065 ']' 00:04:57.020 18:35:07 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.020 18:35:07 -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:57.020 18:35:07 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.020 18:35:07 -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:57.020 18:35:07 -- common/autotest_common.sh@10 -- # set +x 00:04:57.020 [2024-07-20 18:35:07.200912] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:04:57.020 [2024-07-20 18:35:07.201002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1252065 ] 00:04:57.020 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.020 [2024-07-20 18:35:07.265159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.279 [2024-07-20 18:35:07.355326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.536 18:35:07 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:57.536 18:35:07 -- common/autotest_common.sh@860 -- # return 0 00:04:57.536 18:35:07 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:04:57.536 18:35:07 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:04:57.536 18:35:07 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:00.817 nvme0n1 00:05:00.817 18:35:10 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:00.817 [2024-07-20 18:35:10.901586] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:00.817 [2024-07-20 18:35:10.901636] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:00.817 request: 00:05:00.817 { 00:05:00.817 "nvme_ctrlr_name": "nvme0", 00:05:00.817 "password": "test", 00:05:00.817 "method": "bdev_nvme_opal_revert", 00:05:00.817 "req_id": 1 00:05:00.817 } 00:05:00.817 Got JSON-RPC error response 00:05:00.817 response: 00:05:00.817 { 00:05:00.817 "code": -32603, 00:05:00.817 "message": "Internal error" 00:05:00.817 } 00:05:00.817 18:35:10 -- common/autotest_common.sh@1600 -- # true 00:05:00.817 18:35:10 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:00.817 18:35:10 -- common/autotest_common.sh@1604 -- # killprocess 1252065 00:05:00.817 18:35:10 -- common/autotest_common.sh@946 -- # '[' -z 1252065 ']' 00:05:00.817 18:35:10 -- common/autotest_common.sh@950 -- # kill -0 1252065 00:05:00.817 18:35:10 -- common/autotest_common.sh@951 -- # uname 00:05:00.817 18:35:10 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:00.817 18:35:10 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1252065 00:05:00.817 18:35:10 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:00.817 18:35:10 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:00.817 18:35:10 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1252065' 00:05:00.817 killing process with pid 1252065 00:05:00.817 18:35:10 -- common/autotest_common.sh@965 -- # kill 1252065 00:05:00.817 18:35:10 -- common/autotest_common.sh@970 -- # wait 1252065 00:05:02.751 18:35:12 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:02.751 18:35:12 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:02.751 18:35:12 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:02.751 18:35:12 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:02.751 18:35:12 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:02.751 18:35:12 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:02.751 18:35:12 -- common/autotest_common.sh@10 -- # set +x 00:05:02.751 18:35:12 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:02.751 18:35:12 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:02.751 18:35:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:02.751 18:35:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.751 18:35:12 -- common/autotest_common.sh@10 -- # set +x 00:05:02.751 ************************************ 00:05:02.751 START TEST env 00:05:02.751 ************************************ 00:05:02.751 18:35:12 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:02.751 * Looking for test storage... 00:05:02.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:02.751 18:35:12 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:02.751 18:35:12 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:02.751 18:35:12 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.751 18:35:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:02.751 ************************************ 00:05:02.751 START TEST env_memory 00:05:02.751 ************************************ 00:05:02.751 18:35:12 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:02.751 00:05:02.751 00:05:02.751 CUnit - A unit testing framework for C - Version 2.1-3 00:05:02.751 http://cunit.sourceforge.net/ 00:05:02.751 00:05:02.751 00:05:02.751 Suite: memory 00:05:02.751 Test: alloc and free memory map ...[2024-07-20 18:35:12.846031] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:02.751 passed 00:05:02.751 Test: mem map translation ...[2024-07-20 18:35:12.866422] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:02.751 [2024-07-20 18:35:12.866445] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:02.751 [2024-07-20 18:35:12.866495] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:02.751 [2024-07-20 18:35:12.866507] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:02.751 passed 00:05:02.751 Test: mem map registration ...[2024-07-20 18:35:12.907230] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:02.751 [2024-07-20 18:35:12.907250] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:02.751 passed 00:05:02.751 Test: mem map adjacent registrations ...passed 00:05:02.751 00:05:02.751 Run Summary: Type Total Ran Passed Failed Inactive 00:05:02.751 suites 1 1 n/a 0 0 00:05:02.751 tests 4 4 4 0 0 00:05:02.751 asserts 152 152 152 0 n/a 00:05:02.751 00:05:02.751 Elapsed time = 0.142 seconds 00:05:02.751 00:05:02.751 real 0m0.150s 00:05:02.751 user 0m0.140s 00:05:02.751 sys 0m0.009s 00:05:02.751 18:35:12 
env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:02.751 18:35:12 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:02.751 ************************************ 00:05:02.751 END TEST env_memory 00:05:02.751 ************************************ 00:05:02.751 18:35:12 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:02.751 18:35:12 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:02.751 18:35:12 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.751 18:35:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:02.751 ************************************ 00:05:02.751 START TEST env_vtophys 00:05:02.751 ************************************ 00:05:02.751 18:35:13 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:02.751 EAL: lib.eal log level changed from notice to debug 00:05:02.751 EAL: Detected lcore 0 as core 0 on socket 0 00:05:02.751 EAL: Detected lcore 1 as core 1 on socket 0 00:05:02.751 EAL: Detected lcore 2 as core 2 on socket 0 00:05:02.751 EAL: Detected lcore 3 as core 3 on socket 0 00:05:02.751 EAL: Detected lcore 4 as core 4 on socket 0 00:05:02.751 EAL: Detected lcore 5 as core 5 on socket 0 00:05:02.751 EAL: Detected lcore 6 as core 8 on socket 0 00:05:02.751 EAL: Detected lcore 7 as core 9 on socket 0 00:05:02.751 EAL: Detected lcore 8 as core 10 on socket 0 00:05:02.751 EAL: Detected lcore 9 as core 11 on socket 0 00:05:02.751 EAL: Detected lcore 10 as core 12 on socket 0 00:05:02.751 EAL: Detected lcore 11 as core 13 on socket 0 00:05:02.751 EAL: Detected lcore 12 as core 0 on socket 1 00:05:02.751 EAL: Detected lcore 13 as core 1 on socket 1 00:05:02.751 EAL: Detected lcore 14 as core 2 on socket 1 00:05:02.751 EAL: Detected lcore 15 as core 3 on socket 1 00:05:02.751 EAL: Detected lcore 16 as core 4 on socket 1 00:05:02.751 EAL: Detected lcore 17 as core 5 on socket 1 00:05:02.751 EAL: Detected lcore 18 as core 8 on socket 1 00:05:02.751 EAL: Detected lcore 19 as core 9 on socket 1 00:05:02.751 EAL: Detected lcore 20 as core 10 on socket 1 00:05:02.751 EAL: Detected lcore 21 as core 11 on socket 1 00:05:02.751 EAL: Detected lcore 22 as core 12 on socket 1 00:05:02.751 EAL: Detected lcore 23 as core 13 on socket 1 00:05:02.751 EAL: Detected lcore 24 as core 0 on socket 0 00:05:02.751 EAL: Detected lcore 25 as core 1 on socket 0 00:05:02.751 EAL: Detected lcore 26 as core 2 on socket 0 00:05:02.751 EAL: Detected lcore 27 as core 3 on socket 0 00:05:02.751 EAL: Detected lcore 28 as core 4 on socket 0 00:05:02.751 EAL: Detected lcore 29 as core 5 on socket 0 00:05:02.751 EAL: Detected lcore 30 as core 8 on socket 0 00:05:02.751 EAL: Detected lcore 31 as core 9 on socket 0 00:05:02.751 EAL: Detected lcore 32 as core 10 on socket 0 00:05:02.751 EAL: Detected lcore 33 as core 11 on socket 0 00:05:02.751 EAL: Detected lcore 34 as core 12 on socket 0 00:05:02.751 EAL: Detected lcore 35 as core 13 on socket 0 00:05:02.751 EAL: Detected lcore 36 as core 0 on socket 1 00:05:02.751 EAL: Detected lcore 37 as core 1 on socket 1 00:05:02.751 EAL: Detected lcore 38 as core 2 on socket 1 00:05:02.751 EAL: Detected lcore 39 as core 3 on socket 1 00:05:02.751 EAL: Detected lcore 40 as core 4 on socket 1 00:05:02.751 EAL: Detected lcore 41 as core 5 on socket 1 00:05:02.751 EAL: Detected lcore 42 as core 8 on socket 1 00:05:02.751 EAL: Detected lcore 43 as core 9 
on socket 1 00:05:02.751 EAL: Detected lcore 44 as core 10 on socket 1 00:05:02.751 EAL: Detected lcore 45 as core 11 on socket 1 00:05:02.751 EAL: Detected lcore 46 as core 12 on socket 1 00:05:02.751 EAL: Detected lcore 47 as core 13 on socket 1 00:05:02.751 EAL: Maximum logical cores by configuration: 128 00:05:02.751 EAL: Detected CPU lcores: 48 00:05:02.751 EAL: Detected NUMA nodes: 2 00:05:02.751 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:02.751 EAL: Detected shared linkage of DPDK 00:05:02.751 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:02.751 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:02.751 EAL: Registered [vdev] bus. 00:05:02.751 EAL: bus.vdev log level changed from disabled to notice 00:05:02.751 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:02.751 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:02.751 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:02.751 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:02.751 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:02.751 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:02.751 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:02.751 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:02.751 EAL: No shared files mode enabled, IPC will be disabled 00:05:02.751 EAL: No shared files mode enabled, IPC is disabled 00:05:02.751 EAL: Bus pci wants IOVA as 'DC' 00:05:02.751 EAL: Bus vdev wants IOVA as 'DC' 00:05:02.751 EAL: Buses did not request a specific IOVA mode. 00:05:02.751 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:02.751 EAL: Selected IOVA mode 'VA' 00:05:02.751 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.751 EAL: Probing VFIO support... 00:05:02.751 EAL: IOMMU type 1 (Type 1) is supported 00:05:02.751 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:02.751 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:02.751 EAL: VFIO support initialized 00:05:02.751 EAL: Ask a virtual area of 0x2e000 bytes 00:05:02.751 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:02.751 EAL: Setting up physically contiguous memory... 
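At this point EAL has probed VFIO and settled on IOVA-as-VA because an IOMMU is present ("IOMMU type 1 (Type 1) is supported", "VFIO support initialized"). A quick way to sanity-check the same preconditions from a shell, using standard sysfs paths rather than anything SPDK-specific, is sketched below; the BDF is the NVMe controller rebound to vfio-pci earlier in this run.

```bash
# Rough pre-flight checks mirroring EAL's VFIO/IOMMU probe (illustrative only).
bdf=0000:88:00.0   # NVMe controller seen earlier in this log

# 1. Is the vfio-pci driver loaded? (setup.sh rebound the devices to it above)
lsmod | grep -q '^vfio_pci' && echo "vfio-pci module loaded"

# 2. Is an IOMMU active? A populated iommu_groups directory implies type-1 support.
[ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ] && echo "IOMMU groups present"

# 3. Which IOMMU group does the NVMe device belong to?
readlink "/sys/bus/pci/devices/$bdf/iommu_group"

# 4. Which driver is currently bound to it? (should report vfio-pci here)
basename "$(readlink "/sys/bus/pci/devices/$bdf/driver")"
```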
00:05:02.751 EAL: Setting maximum number of open files to 524288 00:05:02.751 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:02.751 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:02.751 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:02.751 EAL: Ask a virtual area of 0x61000 bytes 00:05:02.751 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:02.751 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:02.751 EAL: Ask a virtual area of 0x400000000 bytes 00:05:02.751 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:02.751 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:02.751 EAL: Ask a virtual area of 0x61000 bytes 00:05:02.751 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:02.751 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:02.751 EAL: Ask a virtual area of 0x400000000 bytes 00:05:02.751 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:02.751 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:02.751 EAL: Ask a virtual area of 0x61000 bytes 00:05:02.751 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:02.751 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:02.751 EAL: Ask a virtual area of 0x400000000 bytes 00:05:02.751 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:02.751 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:02.751 EAL: Ask a virtual area of 0x61000 bytes 00:05:02.751 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:02.751 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:02.751 EAL: Ask a virtual area of 0x400000000 bytes 00:05:02.751 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:02.751 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:02.751 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:02.751 EAL: Ask a virtual area of 0x61000 bytes 00:05:02.751 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:02.751 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:02.751 EAL: Ask a virtual area of 0x400000000 bytes 00:05:02.751 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:02.751 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:02.751 EAL: Ask a virtual area of 0x61000 bytes 00:05:02.751 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:02.751 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:02.751 EAL: Ask a virtual area of 0x400000000 bytes 00:05:02.751 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:02.751 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:02.751 EAL: Ask a virtual area of 0x61000 bytes 00:05:02.751 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:02.751 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:02.751 EAL: Ask a virtual area of 0x400000000 bytes 00:05:02.751 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:02.751 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:02.751 EAL: Ask a virtual area of 0x61000 bytes 00:05:02.751 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:02.751 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:02.751 EAL: Ask a virtual area of 0x400000000 bytes 00:05:02.751 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:02.751 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:02.751 EAL: Hugepages will be freed exactly as allocated. 00:05:02.751 EAL: No shared files mode enabled, IPC is disabled 00:05:02.751 EAL: No shared files mode enabled, IPC is disabled 00:05:02.751 EAL: TSC frequency is ~2700000 KHz 00:05:02.751 EAL: Main lcore 0 is ready (tid=7fa50a168a00;cpuset=[0]) 00:05:02.751 EAL: Trying to obtain current memory policy. 00:05:02.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:02.752 EAL: Restoring previous memory policy: 0 00:05:02.752 EAL: request: mp_malloc_sync 00:05:02.752 EAL: No shared files mode enabled, IPC is disabled 00:05:02.752 EAL: Heap on socket 0 was expanded by 2MB 00:05:02.752 EAL: No shared files mode enabled, IPC is disabled 00:05:03.010 EAL: No shared files mode enabled, IPC is disabled 00:05:03.010 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:03.010 EAL: Mem event callback 'spdk:(nil)' registered 00:05:03.010 00:05:03.010 00:05:03.010 CUnit - A unit testing framework for C - Version 2.1-3 00:05:03.010 http://cunit.sourceforge.net/ 00:05:03.010 00:05:03.010 00:05:03.010 Suite: components_suite 00:05:03.010 Test: vtophys_malloc_test ...passed 00:05:03.010 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:03.010 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.010 EAL: Restoring previous memory policy: 4 00:05:03.010 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.010 EAL: request: mp_malloc_sync 00:05:03.010 EAL: No shared files mode enabled, IPC is disabled 00:05:03.010 EAL: Heap on socket 0 was expanded by 4MB 00:05:03.010 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.010 EAL: request: mp_malloc_sync 00:05:03.010 EAL: No shared files mode enabled, IPC is disabled 00:05:03.010 EAL: Heap on socket 0 was shrunk by 4MB 00:05:03.010 EAL: Trying to obtain current memory policy. 00:05:03.010 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.010 EAL: Restoring previous memory policy: 4 00:05:03.010 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.010 EAL: request: mp_malloc_sync 00:05:03.010 EAL: No shared files mode enabled, IPC is disabled 00:05:03.010 EAL: Heap on socket 0 was expanded by 6MB 00:05:03.010 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.010 EAL: request: mp_malloc_sync 00:05:03.010 EAL: No shared files mode enabled, IPC is disabled 00:05:03.010 EAL: Heap on socket 0 was shrunk by 6MB 00:05:03.010 EAL: Trying to obtain current memory policy. 00:05:03.010 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.010 EAL: Restoring previous memory policy: 4 00:05:03.010 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.010 EAL: request: mp_malloc_sync 00:05:03.010 EAL: No shared files mode enabled, IPC is disabled 00:05:03.010 EAL: Heap on socket 0 was expanded by 10MB 00:05:03.010 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.010 EAL: request: mp_malloc_sync 00:05:03.010 EAL: No shared files mode enabled, IPC is disabled 00:05:03.010 EAL: Heap on socket 0 was shrunk by 10MB 00:05:03.010 EAL: Trying to obtain current memory policy. 
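The "No free 2048 kB hugepages reported on node 1" notices and the "Hugepages will be freed exactly as allocated" line both refer to the 2 MB hugepage pools EAL manages per NUMA node. A small sketch for inspecting those pools on a two-socket node like this one is below; the paths are standard Linux sysfs, not part of SPDK, and are given as an illustration only.

```bash
# Inspect per-NUMA-node 2 MB hugepage pools (illustrative; standard kernel sysfs paths).
grep -i huge /proc/meminfo                      # system-wide totals

for node in /sys/devices/system/node/node*; do
    pool="$node/hugepages/hugepages-2048kB"
    echo "$(basename "$node"): total=$(cat "$pool/nr_hugepages") free=$(cat "$pool/free_hugepages")"
done
```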
00:05:03.010 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.010 EAL: Restoring previous memory policy: 4 00:05:03.010 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.010 EAL: request: mp_malloc_sync 00:05:03.010 EAL: No shared files mode enabled, IPC is disabled 00:05:03.010 EAL: Heap on socket 0 was expanded by 18MB 00:05:03.010 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.010 EAL: request: mp_malloc_sync 00:05:03.010 EAL: No shared files mode enabled, IPC is disabled 00:05:03.010 EAL: Heap on socket 0 was shrunk by 18MB 00:05:03.010 EAL: Trying to obtain current memory policy. 00:05:03.010 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.010 EAL: Restoring previous memory policy: 4 00:05:03.010 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.010 EAL: request: mp_malloc_sync 00:05:03.010 EAL: No shared files mode enabled, IPC is disabled 00:05:03.010 EAL: Heap on socket 0 was expanded by 34MB 00:05:03.010 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.010 EAL: request: mp_malloc_sync 00:05:03.010 EAL: No shared files mode enabled, IPC is disabled 00:05:03.010 EAL: Heap on socket 0 was shrunk by 34MB 00:05:03.010 EAL: Trying to obtain current memory policy. 00:05:03.010 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.010 EAL: Restoring previous memory policy: 4 00:05:03.010 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.010 EAL: request: mp_malloc_sync 00:05:03.010 EAL: No shared files mode enabled, IPC is disabled 00:05:03.010 EAL: Heap on socket 0 was expanded by 66MB 00:05:03.010 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.010 EAL: request: mp_malloc_sync 00:05:03.010 EAL: No shared files mode enabled, IPC is disabled 00:05:03.010 EAL: Heap on socket 0 was shrunk by 66MB 00:05:03.010 EAL: Trying to obtain current memory policy. 00:05:03.010 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.010 EAL: Restoring previous memory policy: 4 00:05:03.010 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.010 EAL: request: mp_malloc_sync 00:05:03.010 EAL: No shared files mode enabled, IPC is disabled 00:05:03.010 EAL: Heap on socket 0 was expanded by 130MB 00:05:03.010 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.010 EAL: request: mp_malloc_sync 00:05:03.010 EAL: No shared files mode enabled, IPC is disabled 00:05:03.010 EAL: Heap on socket 0 was shrunk by 130MB 00:05:03.010 EAL: Trying to obtain current memory policy. 00:05:03.010 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.010 EAL: Restoring previous memory policy: 4 00:05:03.010 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.010 EAL: request: mp_malloc_sync 00:05:03.010 EAL: No shared files mode enabled, IPC is disabled 00:05:03.010 EAL: Heap on socket 0 was expanded by 258MB 00:05:03.267 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.267 EAL: request: mp_malloc_sync 00:05:03.267 EAL: No shared files mode enabled, IPC is disabled 00:05:03.267 EAL: Heap on socket 0 was shrunk by 258MB 00:05:03.268 EAL: Trying to obtain current memory policy. 
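The expand/shrink messages here come from the 'spdk:' mem event callback firing as vtophys_spdk_malloc_test allocates and frees progressively larger buffers on socket 0. If a full spdk_tgt were running instead of this standalone unit test, a snapshot of the DPDK heaps could be requested over JSON-RPC; the call below is an assumption-laden illustration of that, not something the vtophys binary itself exposes.

```bash
# Illustration only: against a running spdk_tgt (not this vtophys unit test) the DPDK
# memory statistics can be dumped via JSON-RPC. The output file is chosen by the target.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$rootdir/scripts/rpc.py" env_dpdk_get_mem_stats
# The reply names a file (e.g. /tmp/spdk_mem_dump.txt) with per-socket heap usage,
# which is where growth like "Heap on socket 0 was expanded by ..." would be visible.
```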
00:05:03.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.268 EAL: Restoring previous memory policy: 4 00:05:03.268 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.268 EAL: request: mp_malloc_sync 00:05:03.268 EAL: No shared files mode enabled, IPC is disabled 00:05:03.268 EAL: Heap on socket 0 was expanded by 514MB 00:05:03.525 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.525 EAL: request: mp_malloc_sync 00:05:03.525 EAL: No shared files mode enabled, IPC is disabled 00:05:03.525 EAL: Heap on socket 0 was shrunk by 514MB 00:05:03.525 EAL: Trying to obtain current memory policy. 00:05:03.525 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.784 EAL: Restoring previous memory policy: 4 00:05:03.784 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.784 EAL: request: mp_malloc_sync 00:05:03.784 EAL: No shared files mode enabled, IPC is disabled 00:05:03.784 EAL: Heap on socket 0 was expanded by 1026MB 00:05:04.041 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.299 EAL: request: mp_malloc_sync 00:05:04.299 EAL: No shared files mode enabled, IPC is disabled 00:05:04.299 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:04.299 passed 00:05:04.299 00:05:04.299 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.299 suites 1 1 n/a 0 0 00:05:04.299 tests 2 2 2 0 0 00:05:04.299 asserts 497 497 497 0 n/a 00:05:04.299 00:05:04.299 Elapsed time = 1.387 seconds 00:05:04.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.299 EAL: request: mp_malloc_sync 00:05:04.299 EAL: No shared files mode enabled, IPC is disabled 00:05:04.299 EAL: Heap on socket 0 was shrunk by 2MB 00:05:04.299 EAL: No shared files mode enabled, IPC is disabled 00:05:04.299 EAL: No shared files mode enabled, IPC is disabled 00:05:04.299 EAL: No shared files mode enabled, IPC is disabled 00:05:04.299 00:05:04.299 real 0m1.508s 00:05:04.299 user 0m0.878s 00:05:04.299 sys 0m0.593s 00:05:04.299 18:35:14 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:04.299 18:35:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:04.299 ************************************ 00:05:04.299 END TEST env_vtophys 00:05:04.299 ************************************ 00:05:04.299 18:35:14 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:04.299 18:35:14 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:04.299 18:35:14 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:04.300 18:35:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.300 ************************************ 00:05:04.300 START TEST env_pci 00:05:04.300 ************************************ 00:05:04.300 18:35:14 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:04.300 00:05:04.300 00:05:04.300 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.300 http://cunit.sourceforge.net/ 00:05:04.300 00:05:04.300 00:05:04.300 Suite: pci 00:05:04.300 Test: pci_hook ...[2024-07-20 18:35:14.567718] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1252959 has claimed it 00:05:04.300 EAL: Cannot find device (10000:00:01.0) 00:05:04.300 EAL: Failed to attach device on primary process 00:05:04.300 passed 00:05:04.300 00:05:04.300 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:04.300 suites 1 1 n/a 0 0 00:05:04.300 tests 1 1 1 0 0 00:05:04.300 asserts 25 25 25 0 n/a 00:05:04.300 00:05:04.300 Elapsed time = 0.022 seconds 00:05:04.300 00:05:04.300 real 0m0.034s 00:05:04.300 user 0m0.010s 00:05:04.300 sys 0m0.024s 00:05:04.300 18:35:14 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:04.300 18:35:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:04.300 ************************************ 00:05:04.300 END TEST env_pci 00:05:04.300 ************************************ 00:05:04.300 18:35:14 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:04.300 18:35:14 env -- env/env.sh@15 -- # uname 00:05:04.300 18:35:14 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:04.300 18:35:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:04.300 18:35:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:04.300 18:35:14 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:04.300 18:35:14 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:04.300 18:35:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.558 ************************************ 00:05:04.558 START TEST env_dpdk_post_init 00:05:04.558 ************************************ 00:05:04.558 18:35:14 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:04.558 EAL: Detected CPU lcores: 48 00:05:04.558 EAL: Detected NUMA nodes: 2 00:05:04.558 EAL: Detected shared linkage of DPDK 00:05:04.558 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:04.558 EAL: Selected IOVA mode 'VA' 00:05:04.558 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.558 EAL: VFIO support initialized 00:05:04.558 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:04.558 EAL: Using IOMMU type 1 (Type 1) 00:05:04.558 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:04.558 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:04.558 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:04.558 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:04.558 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:04.558 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:04.558 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:04.558 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:04.558 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:04.558 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:04.558 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:04.816 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:04.816 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:04.816 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:04.816 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:04.816 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:05.382 EAL: Probe PCI 
driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:08.656 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:08.656 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:08.914 Starting DPDK initialization... 00:05:08.914 Starting SPDK post initialization... 00:05:08.914 SPDK NVMe probe 00:05:08.914 Attaching to 0000:88:00.0 00:05:08.914 Attached to 0000:88:00.0 00:05:08.914 Cleaning up... 00:05:08.914 00:05:08.914 real 0m4.435s 00:05:08.914 user 0m3.307s 00:05:08.914 sys 0m0.185s 00:05:08.914 18:35:19 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:08.914 18:35:19 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:08.914 ************************************ 00:05:08.914 END TEST env_dpdk_post_init 00:05:08.914 ************************************ 00:05:08.914 18:35:19 env -- env/env.sh@26 -- # uname 00:05:08.914 18:35:19 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:08.914 18:35:19 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:08.914 18:35:19 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:08.914 18:35:19 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:08.914 18:35:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.914 ************************************ 00:05:08.914 START TEST env_mem_callbacks 00:05:08.914 ************************************ 00:05:08.914 18:35:19 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:08.914 EAL: Detected CPU lcores: 48 00:05:08.914 EAL: Detected NUMA nodes: 2 00:05:08.914 EAL: Detected shared linkage of DPDK 00:05:08.914 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:08.914 EAL: Selected IOVA mode 'VA' 00:05:08.914 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.914 EAL: VFIO support initialized 00:05:08.914 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:08.914 00:05:08.914 00:05:08.914 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.914 http://cunit.sourceforge.net/ 00:05:08.914 00:05:08.914 00:05:08.914 Suite: memory 00:05:08.914 Test: test ... 
00:05:08.914 register 0x200000200000 2097152 00:05:08.914 malloc 3145728 00:05:08.914 register 0x200000400000 4194304 00:05:08.914 buf 0x200000500000 len 3145728 PASSED 00:05:08.914 malloc 64 00:05:08.914 buf 0x2000004fff40 len 64 PASSED 00:05:08.914 malloc 4194304 00:05:08.914 register 0x200000800000 6291456 00:05:08.914 buf 0x200000a00000 len 4194304 PASSED 00:05:08.914 free 0x200000500000 3145728 00:05:08.914 free 0x2000004fff40 64 00:05:08.914 unregister 0x200000400000 4194304 PASSED 00:05:08.914 free 0x200000a00000 4194304 00:05:08.914 unregister 0x200000800000 6291456 PASSED 00:05:08.914 malloc 8388608 00:05:08.914 register 0x200000400000 10485760 00:05:08.914 buf 0x200000600000 len 8388608 PASSED 00:05:08.914 free 0x200000600000 8388608 00:05:08.914 unregister 0x200000400000 10485760 PASSED 00:05:08.914 passed 00:05:08.914 00:05:08.914 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.914 suites 1 1 n/a 0 0 00:05:08.914 tests 1 1 1 0 0 00:05:08.914 asserts 15 15 15 0 n/a 00:05:08.914 00:05:08.914 Elapsed time = 0.005 seconds 00:05:08.914 00:05:08.914 real 0m0.047s 00:05:08.914 user 0m0.013s 00:05:08.914 sys 0m0.034s 00:05:08.914 18:35:19 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:08.914 18:35:19 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:08.914 ************************************ 00:05:08.914 END TEST env_mem_callbacks 00:05:08.914 ************************************ 00:05:08.914 00:05:08.914 real 0m6.455s 00:05:08.914 user 0m4.464s 00:05:08.914 sys 0m1.029s 00:05:08.914 18:35:19 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:08.914 18:35:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.914 ************************************ 00:05:08.914 END TEST env 00:05:08.914 ************************************ 00:05:08.914 18:35:19 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:08.914 18:35:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:08.914 18:35:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:08.914 18:35:19 -- common/autotest_common.sh@10 -- # set +x 00:05:08.914 ************************************ 00:05:08.914 START TEST rpc 00:05:08.914 ************************************ 00:05:08.914 18:35:19 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:09.171 * Looking for test storage... 00:05:09.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:09.171 18:35:19 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1253611 00:05:09.171 18:35:19 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:09.171 18:35:19 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.171 18:35:19 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1253611 00:05:09.171 18:35:19 rpc -- common/autotest_common.sh@827 -- # '[' -z 1253611 ']' 00:05:09.171 18:35:19 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.171 18:35:19 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:09.171 18:35:19 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
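The rpc.sh suite that starts here launches spdk_tgt with the bdev tracepoint group enabled and drives the bdev RPCs through the rpc_cmd wrapper. Outside the harness, the same sequence the rpc_integrity test performs can be reproduced by hand with rpc.py; the calls below mirror the ones visible in the trace that follows (malloc bdev, passthru on top, list, tear down) and assume the default /var/tmp/spdk.sock socket.

```bash
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Same RPCs rpc_integrity issues through rpc_cmd, run manually against the default socket:
"$rootdir/scripts/rpc.py" bdev_malloc_create 8 512            # 8 MB malloc bdev, 512 B blocks -> "Malloc0"
"$rootdir/scripts/rpc.py" bdev_passthru_create -b Malloc0 -p Passthru0
"$rootdir/scripts/rpc.py" bdev_get_bdevs | jq length          # expect 2 bdevs, as the test asserts
"$rootdir/scripts/rpc.py" bdev_passthru_delete Passthru0
"$rootdir/scripts/rpc.py" bdev_malloc_delete Malloc0
```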
00:05:09.171 18:35:19 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:09.171 18:35:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.171 [2024-07-20 18:35:19.335766] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:09.171 [2024-07-20 18:35:19.335884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1253611 ] 00:05:09.171 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.171 [2024-07-20 18:35:19.395308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.171 [2024-07-20 18:35:19.480510] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:09.171 [2024-07-20 18:35:19.480579] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1253611' to capture a snapshot of events at runtime. 00:05:09.171 [2024-07-20 18:35:19.480592] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:09.171 [2024-07-20 18:35:19.480612] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:09.171 [2024-07-20 18:35:19.480621] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1253611 for offline analysis/debug. 00:05:09.171 [2024-07-20 18:35:19.480654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.427 18:35:19 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:09.427 18:35:19 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:09.427 18:35:19 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:09.427 18:35:19 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:09.427 18:35:19 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:09.427 18:35:19 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:09.427 18:35:19 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:09.427 18:35:19 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.427 18:35:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.685 ************************************ 00:05:09.685 START TEST rpc_integrity 00:05:09.685 ************************************ 00:05:09.685 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:09.685 18:35:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:09.685 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.685 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.685 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.685 18:35:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:09.685 18:35:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:09.685 18:35:19 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:09.685 18:35:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:09.685 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.685 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.685 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.685 18:35:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:09.685 18:35:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:09.685 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.685 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.685 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.685 18:35:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:09.685 { 00:05:09.685 "name": "Malloc0", 00:05:09.685 "aliases": [ 00:05:09.685 "6b4cc36a-bc02-4df5-8ad4-d7d446f68d43" 00:05:09.685 ], 00:05:09.685 "product_name": "Malloc disk", 00:05:09.685 "block_size": 512, 00:05:09.685 "num_blocks": 16384, 00:05:09.685 "uuid": "6b4cc36a-bc02-4df5-8ad4-d7d446f68d43", 00:05:09.685 "assigned_rate_limits": { 00:05:09.685 "rw_ios_per_sec": 0, 00:05:09.685 "rw_mbytes_per_sec": 0, 00:05:09.685 "r_mbytes_per_sec": 0, 00:05:09.685 "w_mbytes_per_sec": 0 00:05:09.685 }, 00:05:09.685 "claimed": false, 00:05:09.685 "zoned": false, 00:05:09.685 "supported_io_types": { 00:05:09.685 "read": true, 00:05:09.685 "write": true, 00:05:09.685 "unmap": true, 00:05:09.685 "write_zeroes": true, 00:05:09.685 "flush": true, 00:05:09.685 "reset": true, 00:05:09.685 "compare": false, 00:05:09.685 "compare_and_write": false, 00:05:09.685 "abort": true, 00:05:09.685 "nvme_admin": false, 00:05:09.685 "nvme_io": false 00:05:09.685 }, 00:05:09.686 "memory_domains": [ 00:05:09.686 { 00:05:09.686 "dma_device_id": "system", 00:05:09.686 "dma_device_type": 1 00:05:09.686 }, 00:05:09.686 { 00:05:09.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.686 "dma_device_type": 2 00:05:09.686 } 00:05:09.686 ], 00:05:09.686 "driver_specific": {} 00:05:09.686 } 00:05:09.686 ]' 00:05:09.686 18:35:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:09.686 18:35:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:09.686 18:35:19 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:09.686 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.686 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.686 [2024-07-20 18:35:19.871956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:09.686 [2024-07-20 18:35:19.872002] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:09.686 [2024-07-20 18:35:19.872025] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1864d60 00:05:09.686 [2024-07-20 18:35:19.872039] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:09.686 [2024-07-20 18:35:19.873548] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:09.686 [2024-07-20 18:35:19.873577] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:09.686 Passthru0 00:05:09.686 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.686 18:35:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:09.686 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.686 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.686 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.686 18:35:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:09.686 { 00:05:09.686 "name": "Malloc0", 00:05:09.686 "aliases": [ 00:05:09.686 "6b4cc36a-bc02-4df5-8ad4-d7d446f68d43" 00:05:09.686 ], 00:05:09.686 "product_name": "Malloc disk", 00:05:09.686 "block_size": 512, 00:05:09.686 "num_blocks": 16384, 00:05:09.686 "uuid": "6b4cc36a-bc02-4df5-8ad4-d7d446f68d43", 00:05:09.686 "assigned_rate_limits": { 00:05:09.686 "rw_ios_per_sec": 0, 00:05:09.686 "rw_mbytes_per_sec": 0, 00:05:09.686 "r_mbytes_per_sec": 0, 00:05:09.686 "w_mbytes_per_sec": 0 00:05:09.686 }, 00:05:09.686 "claimed": true, 00:05:09.686 "claim_type": "exclusive_write", 00:05:09.686 "zoned": false, 00:05:09.686 "supported_io_types": { 00:05:09.686 "read": true, 00:05:09.686 "write": true, 00:05:09.686 "unmap": true, 00:05:09.686 "write_zeroes": true, 00:05:09.686 "flush": true, 00:05:09.686 "reset": true, 00:05:09.686 "compare": false, 00:05:09.686 "compare_and_write": false, 00:05:09.686 "abort": true, 00:05:09.686 "nvme_admin": false, 00:05:09.686 "nvme_io": false 00:05:09.686 }, 00:05:09.686 "memory_domains": [ 00:05:09.686 { 00:05:09.686 "dma_device_id": "system", 00:05:09.686 "dma_device_type": 1 00:05:09.686 }, 00:05:09.686 { 00:05:09.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.686 "dma_device_type": 2 00:05:09.686 } 00:05:09.686 ], 00:05:09.686 "driver_specific": {} 00:05:09.686 }, 00:05:09.686 { 00:05:09.686 "name": "Passthru0", 00:05:09.686 "aliases": [ 00:05:09.686 "8787ad48-a05e-5891-93f4-b9775493350a" 00:05:09.686 ], 00:05:09.686 "product_name": "passthru", 00:05:09.686 "block_size": 512, 00:05:09.686 "num_blocks": 16384, 00:05:09.686 "uuid": "8787ad48-a05e-5891-93f4-b9775493350a", 00:05:09.686 "assigned_rate_limits": { 00:05:09.686 "rw_ios_per_sec": 0, 00:05:09.686 "rw_mbytes_per_sec": 0, 00:05:09.686 "r_mbytes_per_sec": 0, 00:05:09.686 "w_mbytes_per_sec": 0 00:05:09.686 }, 00:05:09.686 "claimed": false, 00:05:09.686 "zoned": false, 00:05:09.686 "supported_io_types": { 00:05:09.686 "read": true, 00:05:09.686 "write": true, 00:05:09.686 "unmap": true, 00:05:09.686 "write_zeroes": true, 00:05:09.686 "flush": true, 00:05:09.686 "reset": true, 00:05:09.686 "compare": false, 00:05:09.686 "compare_and_write": false, 00:05:09.686 "abort": true, 00:05:09.686 "nvme_admin": false, 00:05:09.686 "nvme_io": false 00:05:09.686 }, 00:05:09.686 "memory_domains": [ 00:05:09.686 { 00:05:09.686 "dma_device_id": "system", 00:05:09.686 "dma_device_type": 1 00:05:09.686 }, 00:05:09.686 { 00:05:09.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.686 "dma_device_type": 2 00:05:09.686 } 00:05:09.686 ], 00:05:09.686 "driver_specific": { 00:05:09.686 "passthru": { 00:05:09.686 "name": "Passthru0", 00:05:09.686 "base_bdev_name": "Malloc0" 00:05:09.686 } 00:05:09.686 } 00:05:09.686 } 00:05:09.686 ]' 00:05:09.686 18:35:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:09.686 18:35:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:09.686 18:35:19 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:09.686 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.686 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.686 
18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.686 18:35:19 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:09.686 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.686 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.686 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.686 18:35:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:09.686 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.686 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.686 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.686 18:35:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:09.686 18:35:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:09.686 18:35:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:09.686 00:05:09.686 real 0m0.235s 00:05:09.686 user 0m0.147s 00:05:09.686 sys 0m0.026s 00:05:09.686 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.686 18:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.686 ************************************ 00:05:09.686 END TEST rpc_integrity 00:05:09.686 ************************************ 00:05:09.944 18:35:20 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:09.944 18:35:20 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:09.944 18:35:20 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.944 18:35:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.944 ************************************ 00:05:09.944 START TEST rpc_plugins 00:05:09.944 ************************************ 00:05:09.944 18:35:20 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:09.944 18:35:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:09.944 18:35:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.944 18:35:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.944 18:35:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.944 18:35:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:09.944 18:35:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:09.944 18:35:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.944 18:35:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.944 18:35:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.944 18:35:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:09.944 { 00:05:09.944 "name": "Malloc1", 00:05:09.944 "aliases": [ 00:05:09.944 "70e88e30-ef13-4848-8262-ea9326441068" 00:05:09.944 ], 00:05:09.944 "product_name": "Malloc disk", 00:05:09.944 "block_size": 4096, 00:05:09.944 "num_blocks": 256, 00:05:09.944 "uuid": "70e88e30-ef13-4848-8262-ea9326441068", 00:05:09.944 "assigned_rate_limits": { 00:05:09.944 "rw_ios_per_sec": 0, 00:05:09.944 "rw_mbytes_per_sec": 0, 00:05:09.944 "r_mbytes_per_sec": 0, 00:05:09.944 "w_mbytes_per_sec": 0 00:05:09.944 }, 00:05:09.944 "claimed": false, 00:05:09.944 "zoned": false, 00:05:09.944 "supported_io_types": { 00:05:09.944 "read": true, 00:05:09.944 "write": true, 00:05:09.944 "unmap": true, 00:05:09.944 "write_zeroes": true, 00:05:09.944 
"flush": true, 00:05:09.944 "reset": true, 00:05:09.944 "compare": false, 00:05:09.944 "compare_and_write": false, 00:05:09.944 "abort": true, 00:05:09.944 "nvme_admin": false, 00:05:09.944 "nvme_io": false 00:05:09.944 }, 00:05:09.944 "memory_domains": [ 00:05:09.944 { 00:05:09.944 "dma_device_id": "system", 00:05:09.944 "dma_device_type": 1 00:05:09.944 }, 00:05:09.944 { 00:05:09.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.944 "dma_device_type": 2 00:05:09.944 } 00:05:09.944 ], 00:05:09.944 "driver_specific": {} 00:05:09.944 } 00:05:09.944 ]' 00:05:09.944 18:35:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:09.944 18:35:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:09.944 18:35:20 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:09.944 18:35:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.944 18:35:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.944 18:35:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.944 18:35:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:09.944 18:35:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.944 18:35:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.944 18:35:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.944 18:35:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:09.944 18:35:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:09.944 18:35:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:09.944 00:05:09.944 real 0m0.124s 00:05:09.944 user 0m0.083s 00:05:09.944 sys 0m0.007s 00:05:09.944 18:35:20 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.944 18:35:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.944 ************************************ 00:05:09.944 END TEST rpc_plugins 00:05:09.944 ************************************ 00:05:09.944 18:35:20 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:09.944 18:35:20 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:09.944 18:35:20 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.944 18:35:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.944 ************************************ 00:05:09.944 START TEST rpc_trace_cmd_test 00:05:09.944 ************************************ 00:05:09.944 18:35:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:09.944 18:35:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:09.944 18:35:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:09.944 18:35:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.944 18:35:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:09.944 18:35:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.944 18:35:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:09.944 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1253611", 00:05:09.944 "tpoint_group_mask": "0x8", 00:05:09.944 "iscsi_conn": { 00:05:09.944 "mask": "0x2", 00:05:09.944 "tpoint_mask": "0x0" 00:05:09.944 }, 00:05:09.944 "scsi": { 00:05:09.944 "mask": "0x4", 00:05:09.944 "tpoint_mask": "0x0" 00:05:09.944 }, 00:05:09.944 "bdev": { 00:05:09.944 "mask": "0x8", 00:05:09.944 "tpoint_mask": 
"0xffffffffffffffff" 00:05:09.944 }, 00:05:09.944 "nvmf_rdma": { 00:05:09.944 "mask": "0x10", 00:05:09.944 "tpoint_mask": "0x0" 00:05:09.944 }, 00:05:09.944 "nvmf_tcp": { 00:05:09.944 "mask": "0x20", 00:05:09.944 "tpoint_mask": "0x0" 00:05:09.944 }, 00:05:09.944 "ftl": { 00:05:09.944 "mask": "0x40", 00:05:09.944 "tpoint_mask": "0x0" 00:05:09.944 }, 00:05:09.944 "blobfs": { 00:05:09.944 "mask": "0x80", 00:05:09.944 "tpoint_mask": "0x0" 00:05:09.944 }, 00:05:09.944 "dsa": { 00:05:09.944 "mask": "0x200", 00:05:09.944 "tpoint_mask": "0x0" 00:05:09.944 }, 00:05:09.944 "thread": { 00:05:09.944 "mask": "0x400", 00:05:09.944 "tpoint_mask": "0x0" 00:05:09.944 }, 00:05:09.944 "nvme_pcie": { 00:05:09.944 "mask": "0x800", 00:05:09.944 "tpoint_mask": "0x0" 00:05:09.944 }, 00:05:09.944 "iaa": { 00:05:09.944 "mask": "0x1000", 00:05:09.944 "tpoint_mask": "0x0" 00:05:09.944 }, 00:05:09.944 "nvme_tcp": { 00:05:09.944 "mask": "0x2000", 00:05:09.944 "tpoint_mask": "0x0" 00:05:09.944 }, 00:05:09.944 "bdev_nvme": { 00:05:09.944 "mask": "0x4000", 00:05:09.944 "tpoint_mask": "0x0" 00:05:09.944 }, 00:05:09.944 "sock": { 00:05:09.944 "mask": "0x8000", 00:05:09.944 "tpoint_mask": "0x0" 00:05:09.944 } 00:05:09.944 }' 00:05:09.944 18:35:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:09.944 18:35:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:09.944 18:35:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:10.202 18:35:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:10.202 18:35:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:10.202 18:35:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:10.202 18:35:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:10.202 18:35:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:10.202 18:35:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:10.202 18:35:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:10.202 00:05:10.202 real 0m0.202s 00:05:10.202 user 0m0.172s 00:05:10.202 sys 0m0.020s 00:05:10.202 18:35:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:10.202 18:35:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:10.202 ************************************ 00:05:10.202 END TEST rpc_trace_cmd_test 00:05:10.202 ************************************ 00:05:10.202 18:35:20 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:10.202 18:35:20 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:10.202 18:35:20 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:10.202 18:35:20 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:10.202 18:35:20 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:10.202 18:35:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.202 ************************************ 00:05:10.202 START TEST rpc_daemon_integrity 00:05:10.202 ************************************ 00:05:10.202 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:10.202 18:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:10.202 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.202 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.202 18:35:20 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.202 18:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:10.202 18:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:10.202 18:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:10.202 18:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:10.202 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.202 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.202 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.202 18:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:10.202 18:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:10.202 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.202 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.459 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.459 18:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:10.459 { 00:05:10.459 "name": "Malloc2", 00:05:10.459 "aliases": [ 00:05:10.459 "6ed67668-a723-4b36-943a-49c9ad0fac98" 00:05:10.459 ], 00:05:10.459 "product_name": "Malloc disk", 00:05:10.459 "block_size": 512, 00:05:10.459 "num_blocks": 16384, 00:05:10.459 "uuid": "6ed67668-a723-4b36-943a-49c9ad0fac98", 00:05:10.460 "assigned_rate_limits": { 00:05:10.460 "rw_ios_per_sec": 0, 00:05:10.460 "rw_mbytes_per_sec": 0, 00:05:10.460 "r_mbytes_per_sec": 0, 00:05:10.460 "w_mbytes_per_sec": 0 00:05:10.460 }, 00:05:10.460 "claimed": false, 00:05:10.460 "zoned": false, 00:05:10.460 "supported_io_types": { 00:05:10.460 "read": true, 00:05:10.460 "write": true, 00:05:10.460 "unmap": true, 00:05:10.460 "write_zeroes": true, 00:05:10.460 "flush": true, 00:05:10.460 "reset": true, 00:05:10.460 "compare": false, 00:05:10.460 "compare_and_write": false, 00:05:10.460 "abort": true, 00:05:10.460 "nvme_admin": false, 00:05:10.460 "nvme_io": false 00:05:10.460 }, 00:05:10.460 "memory_domains": [ 00:05:10.460 { 00:05:10.460 "dma_device_id": "system", 00:05:10.460 "dma_device_type": 1 00:05:10.460 }, 00:05:10.460 { 00:05:10.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.460 "dma_device_type": 2 00:05:10.460 } 00:05:10.460 ], 00:05:10.460 "driver_specific": {} 00:05:10.460 } 00:05:10.460 ]' 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.460 [2024-07-20 18:35:20.566141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:10.460 [2024-07-20 18:35:20.566191] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:10.460 [2024-07-20 18:35:20.566216] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a16420 00:05:10.460 [2024-07-20 18:35:20.566232] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:10.460 [2024-07-20 18:35:20.567598] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:10.460 [2024-07-20 18:35:20.567636] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:10.460 Passthru0 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:10.460 { 00:05:10.460 "name": "Malloc2", 00:05:10.460 "aliases": [ 00:05:10.460 "6ed67668-a723-4b36-943a-49c9ad0fac98" 00:05:10.460 ], 00:05:10.460 "product_name": "Malloc disk", 00:05:10.460 "block_size": 512, 00:05:10.460 "num_blocks": 16384, 00:05:10.460 "uuid": "6ed67668-a723-4b36-943a-49c9ad0fac98", 00:05:10.460 "assigned_rate_limits": { 00:05:10.460 "rw_ios_per_sec": 0, 00:05:10.460 "rw_mbytes_per_sec": 0, 00:05:10.460 "r_mbytes_per_sec": 0, 00:05:10.460 "w_mbytes_per_sec": 0 00:05:10.460 }, 00:05:10.460 "claimed": true, 00:05:10.460 "claim_type": "exclusive_write", 00:05:10.460 "zoned": false, 00:05:10.460 "supported_io_types": { 00:05:10.460 "read": true, 00:05:10.460 "write": true, 00:05:10.460 "unmap": true, 00:05:10.460 "write_zeroes": true, 00:05:10.460 "flush": true, 00:05:10.460 "reset": true, 00:05:10.460 "compare": false, 00:05:10.460 "compare_and_write": false, 00:05:10.460 "abort": true, 00:05:10.460 "nvme_admin": false, 00:05:10.460 "nvme_io": false 00:05:10.460 }, 00:05:10.460 "memory_domains": [ 00:05:10.460 { 00:05:10.460 "dma_device_id": "system", 00:05:10.460 "dma_device_type": 1 00:05:10.460 }, 00:05:10.460 { 00:05:10.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.460 "dma_device_type": 2 00:05:10.460 } 00:05:10.460 ], 00:05:10.460 "driver_specific": {} 00:05:10.460 }, 00:05:10.460 { 00:05:10.460 "name": "Passthru0", 00:05:10.460 "aliases": [ 00:05:10.460 "5281e0ee-5572-554e-8c0d-dc6bb0825665" 00:05:10.460 ], 00:05:10.460 "product_name": "passthru", 00:05:10.460 "block_size": 512, 00:05:10.460 "num_blocks": 16384, 00:05:10.460 "uuid": "5281e0ee-5572-554e-8c0d-dc6bb0825665", 00:05:10.460 "assigned_rate_limits": { 00:05:10.460 "rw_ios_per_sec": 0, 00:05:10.460 "rw_mbytes_per_sec": 0, 00:05:10.460 "r_mbytes_per_sec": 0, 00:05:10.460 "w_mbytes_per_sec": 0 00:05:10.460 }, 00:05:10.460 "claimed": false, 00:05:10.460 "zoned": false, 00:05:10.460 "supported_io_types": { 00:05:10.460 "read": true, 00:05:10.460 "write": true, 00:05:10.460 "unmap": true, 00:05:10.460 "write_zeroes": true, 00:05:10.460 "flush": true, 00:05:10.460 "reset": true, 00:05:10.460 "compare": false, 00:05:10.460 "compare_and_write": false, 00:05:10.460 "abort": true, 00:05:10.460 "nvme_admin": false, 00:05:10.460 "nvme_io": false 00:05:10.460 }, 00:05:10.460 "memory_domains": [ 00:05:10.460 { 00:05:10.460 "dma_device_id": "system", 00:05:10.460 "dma_device_type": 1 00:05:10.460 }, 00:05:10.460 { 00:05:10.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.460 "dma_device_type": 2 00:05:10.460 } 00:05:10.460 ], 00:05:10.460 "driver_specific": { 00:05:10.460 "passthru": { 00:05:10.460 "name": "Passthru0", 00:05:10.460 "base_bdev_name": "Malloc2" 00:05:10.460 } 00:05:10.460 } 00:05:10.460 } 00:05:10.460 ]' 00:05:10.460 18:35:20 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:10.460 00:05:10.460 real 0m0.221s 00:05:10.460 user 0m0.146s 00:05:10.460 sys 0m0.020s 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:10.460 18:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.460 ************************************ 00:05:10.460 END TEST rpc_daemon_integrity 00:05:10.460 ************************************ 00:05:10.460 18:35:20 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:10.460 18:35:20 rpc -- rpc/rpc.sh@84 -- # killprocess 1253611 00:05:10.460 18:35:20 rpc -- common/autotest_common.sh@946 -- # '[' -z 1253611 ']' 00:05:10.460 18:35:20 rpc -- common/autotest_common.sh@950 -- # kill -0 1253611 00:05:10.460 18:35:20 rpc -- common/autotest_common.sh@951 -- # uname 00:05:10.460 18:35:20 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:10.460 18:35:20 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1253611 00:05:10.460 18:35:20 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:10.460 18:35:20 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:10.460 18:35:20 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1253611' 00:05:10.460 killing process with pid 1253611 00:05:10.460 18:35:20 rpc -- common/autotest_common.sh@965 -- # kill 1253611 00:05:10.460 18:35:20 rpc -- common/autotest_common.sh@970 -- # wait 1253611 00:05:11.025 00:05:11.025 real 0m1.895s 00:05:11.025 user 0m2.387s 00:05:11.025 sys 0m0.586s 00:05:11.025 18:35:21 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.025 18:35:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.025 ************************************ 00:05:11.025 END TEST rpc 00:05:11.025 ************************************ 00:05:11.025 18:35:21 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:11.025 18:35:21 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.025 18:35:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.025 18:35:21 -- common/autotest_common.sh@10 -- # set +x 00:05:11.025 ************************************ 00:05:11.025 START TEST skip_rpc 00:05:11.025 ************************************ 00:05:11.025 18:35:21 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:11.025 * Looking for test storage... 00:05:11.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:11.025 18:35:21 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:11.026 18:35:21 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:11.026 18:35:21 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:11.026 18:35:21 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.026 18:35:21 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.026 18:35:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.026 ************************************ 00:05:11.026 START TEST skip_rpc 00:05:11.026 ************************************ 00:05:11.026 18:35:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:11.026 18:35:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1254044 00:05:11.026 18:35:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:11.026 18:35:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.026 18:35:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:11.026 [2024-07-20 18:35:21.305847] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
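Note on the skip_rpc case above: rpc/skip_rpc.sh starts the target with --no-rpc-server, sleeps, and then expects the RPC issued next to fail. A minimal standalone sketch of that flow, using the same binaries seen in this trace (paths shortened to the spdk tree root; rpc_cmd in the harness is a wrapper, plain rpc.py stands in for it here):

    # Start the target with its RPC server disabled, as rpc/skip_rpc.sh does above.
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5    # mirrors the script's sleep before issuing the RPC

    # With no RPC listener, the call must fail; the test treats that failure as a pass.
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC succeeded" >&2
    else
        echo "RPC correctly rejected while --no-rpc-server is in effect"
    fi
    kill "$tgt_pid"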
00:05:11.026 [2024-07-20 18:35:21.305911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254044 ] 00:05:11.026 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.283 [2024-07-20 18:35:21.366576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.283 [2024-07-20 18:35:21.456921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1254044 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 1254044 ']' 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 1254044 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1254044 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1254044' 00:05:16.538 killing process with pid 1254044 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 1254044 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 1254044 00:05:16.538 00:05:16.538 real 0m5.438s 00:05:16.538 user 0m5.122s 00:05:16.538 sys 0m0.321s 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.538 18:35:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.538 ************************************ 00:05:16.538 END TEST skip_rpc 
00:05:16.538 ************************************ 00:05:16.538 18:35:26 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:16.538 18:35:26 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.538 18:35:26 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.538 18:35:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.538 ************************************ 00:05:16.538 START TEST skip_rpc_with_json 00:05:16.538 ************************************ 00:05:16.538 18:35:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:16.538 18:35:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:16.538 18:35:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1254737 00:05:16.538 18:35:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.538 18:35:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.538 18:35:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1254737 00:05:16.538 18:35:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 1254737 ']' 00:05:16.538 18:35:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.538 18:35:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:16.538 18:35:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.538 18:35:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:16.538 18:35:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:16.538 [2024-07-20 18:35:26.789629] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:05:16.538 [2024-07-20 18:35:26.789711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254737 ] 00:05:16.538 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.538 [2024-07-20 18:35:26.847803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.796 [2024-07-20 18:35:26.937789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.056 18:35:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:17.057 18:35:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:17.057 18:35:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:17.057 18:35:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.057 18:35:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.057 [2024-07-20 18:35:27.189430] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:17.057 request: 00:05:17.057 { 00:05:17.057 "trtype": "tcp", 00:05:17.057 "method": "nvmf_get_transports", 00:05:17.057 "req_id": 1 00:05:17.057 } 00:05:17.057 Got JSON-RPC error response 00:05:17.057 response: 00:05:17.057 { 00:05:17.057 "code": -19, 00:05:17.057 "message": "No such device" 00:05:17.057 } 00:05:17.057 18:35:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:17.057 18:35:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:17.057 18:35:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.057 18:35:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.057 [2024-07-20 18:35:27.197558] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:17.057 18:35:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.057 18:35:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:17.057 18:35:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.057 18:35:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.057 18:35:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.057 18:35:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:17.057 { 00:05:17.057 "subsystems": [ 00:05:17.057 { 00:05:17.057 "subsystem": "vfio_user_target", 00:05:17.057 "config": null 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "subsystem": "keyring", 00:05:17.057 "config": [] 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "subsystem": "iobuf", 00:05:17.057 "config": [ 00:05:17.057 { 00:05:17.057 "method": "iobuf_set_options", 00:05:17.057 "params": { 00:05:17.057 "small_pool_count": 8192, 00:05:17.057 "large_pool_count": 1024, 00:05:17.057 "small_bufsize": 8192, 00:05:17.057 "large_bufsize": 135168 00:05:17.057 } 00:05:17.057 } 00:05:17.057 ] 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "subsystem": "sock", 00:05:17.057 "config": [ 00:05:17.057 { 00:05:17.057 "method": "sock_set_default_impl", 00:05:17.057 "params": { 00:05:17.057 "impl_name": "posix" 00:05:17.057 } 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "method": 
"sock_impl_set_options", 00:05:17.057 "params": { 00:05:17.057 "impl_name": "ssl", 00:05:17.057 "recv_buf_size": 4096, 00:05:17.057 "send_buf_size": 4096, 00:05:17.057 "enable_recv_pipe": true, 00:05:17.057 "enable_quickack": false, 00:05:17.057 "enable_placement_id": 0, 00:05:17.057 "enable_zerocopy_send_server": true, 00:05:17.057 "enable_zerocopy_send_client": false, 00:05:17.057 "zerocopy_threshold": 0, 00:05:17.057 "tls_version": 0, 00:05:17.057 "enable_ktls": false 00:05:17.057 } 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "method": "sock_impl_set_options", 00:05:17.057 "params": { 00:05:17.057 "impl_name": "posix", 00:05:17.057 "recv_buf_size": 2097152, 00:05:17.057 "send_buf_size": 2097152, 00:05:17.057 "enable_recv_pipe": true, 00:05:17.057 "enable_quickack": false, 00:05:17.057 "enable_placement_id": 0, 00:05:17.057 "enable_zerocopy_send_server": true, 00:05:17.057 "enable_zerocopy_send_client": false, 00:05:17.057 "zerocopy_threshold": 0, 00:05:17.057 "tls_version": 0, 00:05:17.057 "enable_ktls": false 00:05:17.057 } 00:05:17.057 } 00:05:17.057 ] 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "subsystem": "vmd", 00:05:17.057 "config": [] 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "subsystem": "accel", 00:05:17.057 "config": [ 00:05:17.057 { 00:05:17.057 "method": "accel_set_options", 00:05:17.057 "params": { 00:05:17.057 "small_cache_size": 128, 00:05:17.057 "large_cache_size": 16, 00:05:17.057 "task_count": 2048, 00:05:17.057 "sequence_count": 2048, 00:05:17.057 "buf_count": 2048 00:05:17.057 } 00:05:17.057 } 00:05:17.057 ] 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "subsystem": "bdev", 00:05:17.057 "config": [ 00:05:17.057 { 00:05:17.057 "method": "bdev_set_options", 00:05:17.057 "params": { 00:05:17.057 "bdev_io_pool_size": 65535, 00:05:17.057 "bdev_io_cache_size": 256, 00:05:17.057 "bdev_auto_examine": true, 00:05:17.057 "iobuf_small_cache_size": 128, 00:05:17.057 "iobuf_large_cache_size": 16 00:05:17.057 } 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "method": "bdev_raid_set_options", 00:05:17.057 "params": { 00:05:17.057 "process_window_size_kb": 1024 00:05:17.057 } 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "method": "bdev_iscsi_set_options", 00:05:17.057 "params": { 00:05:17.057 "timeout_sec": 30 00:05:17.057 } 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "method": "bdev_nvme_set_options", 00:05:17.057 "params": { 00:05:17.057 "action_on_timeout": "none", 00:05:17.057 "timeout_us": 0, 00:05:17.057 "timeout_admin_us": 0, 00:05:17.057 "keep_alive_timeout_ms": 10000, 00:05:17.057 "arbitration_burst": 0, 00:05:17.057 "low_priority_weight": 0, 00:05:17.057 "medium_priority_weight": 0, 00:05:17.057 "high_priority_weight": 0, 00:05:17.057 "nvme_adminq_poll_period_us": 10000, 00:05:17.057 "nvme_ioq_poll_period_us": 0, 00:05:17.057 "io_queue_requests": 0, 00:05:17.057 "delay_cmd_submit": true, 00:05:17.057 "transport_retry_count": 4, 00:05:17.057 "bdev_retry_count": 3, 00:05:17.057 "transport_ack_timeout": 0, 00:05:17.057 "ctrlr_loss_timeout_sec": 0, 00:05:17.057 "reconnect_delay_sec": 0, 00:05:17.057 "fast_io_fail_timeout_sec": 0, 00:05:17.057 "disable_auto_failback": false, 00:05:17.057 "generate_uuids": false, 00:05:17.057 "transport_tos": 0, 00:05:17.057 "nvme_error_stat": false, 00:05:17.057 "rdma_srq_size": 0, 00:05:17.057 "io_path_stat": false, 00:05:17.057 "allow_accel_sequence": false, 00:05:17.057 "rdma_max_cq_size": 0, 00:05:17.057 "rdma_cm_event_timeout_ms": 0, 00:05:17.057 "dhchap_digests": [ 00:05:17.057 "sha256", 00:05:17.057 "sha384", 00:05:17.057 "sha512" 
00:05:17.057 ], 00:05:17.057 "dhchap_dhgroups": [ 00:05:17.057 "null", 00:05:17.057 "ffdhe2048", 00:05:17.057 "ffdhe3072", 00:05:17.057 "ffdhe4096", 00:05:17.057 "ffdhe6144", 00:05:17.057 "ffdhe8192" 00:05:17.057 ] 00:05:17.057 } 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "method": "bdev_nvme_set_hotplug", 00:05:17.057 "params": { 00:05:17.057 "period_us": 100000, 00:05:17.057 "enable": false 00:05:17.057 } 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "method": "bdev_wait_for_examine" 00:05:17.057 } 00:05:17.057 ] 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "subsystem": "scsi", 00:05:17.057 "config": null 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "subsystem": "scheduler", 00:05:17.057 "config": [ 00:05:17.057 { 00:05:17.057 "method": "framework_set_scheduler", 00:05:17.057 "params": { 00:05:17.057 "name": "static" 00:05:17.057 } 00:05:17.057 } 00:05:17.057 ] 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "subsystem": "vhost_scsi", 00:05:17.057 "config": [] 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "subsystem": "vhost_blk", 00:05:17.057 "config": [] 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "subsystem": "ublk", 00:05:17.057 "config": [] 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "subsystem": "nbd", 00:05:17.057 "config": [] 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "subsystem": "nvmf", 00:05:17.057 "config": [ 00:05:17.057 { 00:05:17.057 "method": "nvmf_set_config", 00:05:17.057 "params": { 00:05:17.057 "discovery_filter": "match_any", 00:05:17.057 "admin_cmd_passthru": { 00:05:17.057 "identify_ctrlr": false 00:05:17.057 } 00:05:17.057 } 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "method": "nvmf_set_max_subsystems", 00:05:17.057 "params": { 00:05:17.057 "max_subsystems": 1024 00:05:17.057 } 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "method": "nvmf_set_crdt", 00:05:17.057 "params": { 00:05:17.057 "crdt1": 0, 00:05:17.057 "crdt2": 0, 00:05:17.057 "crdt3": 0 00:05:17.057 } 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "method": "nvmf_create_transport", 00:05:17.057 "params": { 00:05:17.057 "trtype": "TCP", 00:05:17.057 "max_queue_depth": 128, 00:05:17.057 "max_io_qpairs_per_ctrlr": 127, 00:05:17.057 "in_capsule_data_size": 4096, 00:05:17.057 "max_io_size": 131072, 00:05:17.057 "io_unit_size": 131072, 00:05:17.057 "max_aq_depth": 128, 00:05:17.057 "num_shared_buffers": 511, 00:05:17.057 "buf_cache_size": 4294967295, 00:05:17.057 "dif_insert_or_strip": false, 00:05:17.057 "zcopy": false, 00:05:17.057 "c2h_success": true, 00:05:17.057 "sock_priority": 0, 00:05:17.057 "abort_timeout_sec": 1, 00:05:17.057 "ack_timeout": 0, 00:05:17.057 "data_wr_pool_size": 0 00:05:17.057 } 00:05:17.057 } 00:05:17.057 ] 00:05:17.057 }, 00:05:17.057 { 00:05:17.057 "subsystem": "iscsi", 00:05:17.057 "config": [ 00:05:17.057 { 00:05:17.057 "method": "iscsi_set_options", 00:05:17.057 "params": { 00:05:17.057 "node_base": "iqn.2016-06.io.spdk", 00:05:17.057 "max_sessions": 128, 00:05:17.058 "max_connections_per_session": 2, 00:05:17.058 "max_queue_depth": 64, 00:05:17.058 "default_time2wait": 2, 00:05:17.058 "default_time2retain": 20, 00:05:17.058 "first_burst_length": 8192, 00:05:17.058 "immediate_data": true, 00:05:17.058 "allow_duplicated_isid": false, 00:05:17.058 "error_recovery_level": 0, 00:05:17.058 "nop_timeout": 60, 00:05:17.058 "nop_in_interval": 30, 00:05:17.058 "disable_chap": false, 00:05:17.058 "require_chap": false, 00:05:17.058 "mutual_chap": false, 00:05:17.058 "chap_group": 0, 00:05:17.058 "max_large_datain_per_connection": 64, 00:05:17.058 "max_r2t_per_connection": 4, 00:05:17.058 
"pdu_pool_size": 36864, 00:05:17.058 "immediate_data_pool_size": 16384, 00:05:17.058 "data_out_pool_size": 2048 00:05:17.058 } 00:05:17.058 } 00:05:17.058 ] 00:05:17.058 } 00:05:17.058 ] 00:05:17.058 } 00:05:17.058 18:35:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:17.058 18:35:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1254737 00:05:17.058 18:35:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 1254737 ']' 00:05:17.058 18:35:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 1254737 00:05:17.058 18:35:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:17.058 18:35:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:17.058 18:35:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1254737 00:05:17.316 18:35:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:17.316 18:35:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:17.316 18:35:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1254737' 00:05:17.316 killing process with pid 1254737 00:05:17.316 18:35:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 1254737 00:05:17.316 18:35:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 1254737 00:05:17.574 18:35:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1254877 00:05:17.574 18:35:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:17.574 18:35:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:22.863 18:35:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1254877 00:05:22.863 18:35:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 1254877 ']' 00:05:22.863 18:35:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 1254877 00:05:22.863 18:35:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:22.863 18:35:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:22.863 18:35:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1254877 00:05:22.863 18:35:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:22.863 18:35:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:22.863 18:35:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1254877' 00:05:22.863 killing process with pid 1254877 00:05:22.863 18:35:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 1254877 00:05:22.863 18:35:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 1254877 00:05:23.120 18:35:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:23.120 18:35:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:23.120 00:05:23.120 real 
0m6.489s 00:05:23.120 user 0m6.088s 00:05:23.120 sys 0m0.690s 00:05:23.120 18:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.120 18:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:23.120 ************************************ 00:05:23.120 END TEST skip_rpc_with_json 00:05:23.120 ************************************ 00:05:23.120 18:35:33 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:23.120 18:35:33 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.120 18:35:33 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.120 18:35:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.120 ************************************ 00:05:23.120 START TEST skip_rpc_with_delay 00:05:23.120 ************************************ 00:05:23.120 18:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:23.120 18:35:33 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:23.121 18:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:23.121 18:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:23.121 18:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.121 18:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.121 18:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.121 18:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.121 18:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.121 18:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.121 18:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.121 18:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:23.121 18:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:23.121 [2024-07-20 18:35:33.327278] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
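For reference, the skip_rpc_with_delay failure above comes from combining two mutually exclusive flags: --no-rpc-server disables the RPC listener, so --wait-for-rpc has nothing to wait for and spdk_tgt aborts with the error just printed. The same message can be reproduced outside the harness with the command taken verbatim from the trace (the exit-status check is illustrative):

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    echo "exit status: $?"    # expected non-zero, which is what the NOT wrapper asserts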
00:05:23.121 [2024-07-20 18:35:33.327379] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:23.121 18:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:23.121 18:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.121 18:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:23.121 18:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.121 00:05:23.121 real 0m0.068s 00:05:23.121 user 0m0.043s 00:05:23.121 sys 0m0.025s 00:05:23.121 18:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.121 18:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:23.121 ************************************ 00:05:23.121 END TEST skip_rpc_with_delay 00:05:23.121 ************************************ 00:05:23.121 18:35:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:23.121 18:35:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:23.121 18:35:33 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:23.121 18:35:33 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.121 18:35:33 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.121 18:35:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.121 ************************************ 00:05:23.121 START TEST exit_on_failed_rpc_init 00:05:23.121 ************************************ 00:05:23.121 18:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:23.121 18:35:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1255604 00:05:23.121 18:35:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.121 18:35:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1255604 00:05:23.121 18:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 1255604 ']' 00:05:23.121 18:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.121 18:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:23.121 18:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.121 18:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:23.121 18:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:23.121 [2024-07-20 18:35:33.438409] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:05:23.121 [2024-07-20 18:35:33.438493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255604 ] 00:05:23.378 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.379 [2024-07-20 18:35:33.497769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.379 [2024-07-20 18:35:33.585368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.636 18:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:23.636 18:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:23.636 18:35:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.636 18:35:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:23.636 18:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:23.636 18:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:23.636 18:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.636 18:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.636 18:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.636 18:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.636 18:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.636 18:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.636 18:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.636 18:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:23.636 18:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:23.636 [2024-07-20 18:35:33.891505] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:23.636 [2024-07-20 18:35:33.891579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255616 ] 00:05:23.636 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.637 [2024-07-20 18:35:33.953511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.893 [2024-07-20 18:35:34.047138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.893 [2024-07-20 18:35:34.047266] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
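The exit_on_failed_rpc_init failure path is two targets contending for the default RPC socket: the first instance claims /var/tmp/spdk.sock, so the second aborts with the "in use" error shown above. A rough sketch under the same defaults seen in the trace (the test waits with waitforlisten rather than a fixed sleep; the sleep here is only illustrative):

    # First instance takes the default RPC socket /var/tmp/spdk.sock.
    ./build/bin/spdk_tgt -m 0x1 &
    first_pid=$!
    sleep 1

    # Second instance on another core mask targets the same socket and must fail to init.
    ./build/bin/spdk_tgt -m 0x2
    echo "second instance exit status: $?"    # non-zero: RPC socket path already in use
    kill "$first_pid"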
00:05:23.893 [2024-07-20 18:35:34.047285] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:23.893 [2024-07-20 18:35:34.047297] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:23.893 18:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:23.893 18:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.893 18:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:23.893 18:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:23.893 18:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:23.893 18:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.893 18:35:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:23.893 18:35:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1255604 00:05:23.893 18:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 1255604 ']' 00:05:23.893 18:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 1255604 00:05:23.893 18:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:23.893 18:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:23.893 18:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1255604 00:05:23.893 18:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:23.893 18:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:23.893 18:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1255604' 00:05:23.893 killing process with pid 1255604 00:05:23.893 18:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 1255604 00:05:23.893 18:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 1255604 00:05:24.458 00:05:24.458 real 0m1.192s 00:05:24.458 user 0m1.294s 00:05:24.458 sys 0m0.450s 00:05:24.458 18:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.458 18:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:24.458 ************************************ 00:05:24.458 END TEST exit_on_failed_rpc_init 00:05:24.458 ************************************ 00:05:24.458 18:35:34 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:24.458 00:05:24.458 real 0m13.422s 00:05:24.458 user 0m12.643s 00:05:24.458 sys 0m1.639s 00:05:24.458 18:35:34 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.458 18:35:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.458 ************************************ 00:05:24.458 END TEST skip_rpc 00:05:24.458 ************************************ 00:05:24.458 18:35:34 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:24.458 18:35:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:24.458 18:35:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.458 18:35:34 -- 
common/autotest_common.sh@10 -- # set +x 00:05:24.458 ************************************ 00:05:24.458 START TEST rpc_client 00:05:24.458 ************************************ 00:05:24.458 18:35:34 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:24.458 * Looking for test storage... 00:05:24.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:24.458 18:35:34 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:24.458 OK 00:05:24.458 18:35:34 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:24.458 00:05:24.458 real 0m0.069s 00:05:24.458 user 0m0.034s 00:05:24.458 sys 0m0.039s 00:05:24.458 18:35:34 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.458 18:35:34 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:24.458 ************************************ 00:05:24.458 END TEST rpc_client 00:05:24.458 ************************************ 00:05:24.458 18:35:34 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:24.458 18:35:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:24.458 18:35:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.458 18:35:34 -- common/autotest_common.sh@10 -- # set +x 00:05:24.458 ************************************ 00:05:24.458 START TEST json_config 00:05:24.458 ************************************ 00:05:24.458 18:35:34 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:24.716 18:35:34 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:24.716 18:35:34 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:24.716 18:35:34 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.716 18:35:34 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.716 18:35:34 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.716 18:35:34 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.716 18:35:34 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.716 18:35:34 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.716 18:35:34 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.716 18:35:34 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.716 18:35:34 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.716 18:35:34 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.716 18:35:34 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:24.716 18:35:34 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:24.716 18:35:34 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.716 18:35:34 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.716 18:35:34 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:24.716 18:35:34 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.716 18:35:34 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:24.716 18:35:34 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.716 18:35:34 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.716 18:35:34 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.716 18:35:34 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.716 18:35:34 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.716 18:35:34 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.716 18:35:34 json_config -- paths/export.sh@5 -- # export PATH 00:05:24.716 18:35:34 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.716 18:35:34 json_config -- nvmf/common.sh@47 -- # : 0 00:05:24.716 18:35:34 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:24.717 18:35:34 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:24.717 18:35:34 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.717 18:35:34 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.717 18:35:34 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.717 18:35:34 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:24.717 18:35:34 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:24.717 18:35:34 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:24.717 18:35:34 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:24.717 18:35:34 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:24.717 18:35:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:24.717 18:35:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:24.717 18:35:34 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:24.717 18:35:34 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:24.717 18:35:34 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:24.717 18:35:34 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:24.717 18:35:34 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:24.717 18:35:34 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:24.717 18:35:34 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:24.717 18:35:34 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:24.717 18:35:34 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:24.717 18:35:34 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:24.717 18:35:34 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:24.717 18:35:34 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:24.717 INFO: JSON configuration test init 00:05:24.717 18:35:34 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:24.717 18:35:34 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:24.717 18:35:34 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:24.717 18:35:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.717 18:35:34 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:24.717 18:35:34 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:24.717 18:35:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.717 18:35:34 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:24.717 18:35:34 json_config -- json_config/common.sh@9 -- # local app=target 00:05:24.717 18:35:34 json_config -- json_config/common.sh@10 -- # shift 00:05:24.717 18:35:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:24.717 18:35:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:24.717 18:35:34 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:24.717 18:35:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.717 18:35:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.717 18:35:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1255860 00:05:24.717 18:35:34 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:24.717 18:35:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:24.717 Waiting for target to run... 
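Worth noting for the json_config run starting here: the target is launched on a non-default RPC socket (-r /var/tmp/spdk_tgt.sock, with -s 1024 capping the DPDK memory size at 1024 MB), which is why every tgt_rpc call later in this trace passes -s /var/tmp/spdk_tgt.sock to rpc.py. Roughly:

    # Target on a custom RPC socket, as launched above by json_config/common.sh.
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &

    # Subsequent RPCs in the test therefore take the form:
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config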
00:05:24.717 18:35:34 json_config -- json_config/common.sh@25 -- # waitforlisten 1255860 /var/tmp/spdk_tgt.sock 00:05:24.717 18:35:34 json_config -- common/autotest_common.sh@827 -- # '[' -z 1255860 ']' 00:05:24.717 18:35:34 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:24.717 18:35:34 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:24.717 18:35:34 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:24.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:24.717 18:35:34 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:24.717 18:35:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.717 [2024-07-20 18:35:34.872599] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:24.717 [2024-07-20 18:35:34.872678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255860 ] 00:05:24.717 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.974 [2024-07-20 18:35:35.227675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.974 [2024-07-20 18:35:35.292014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.538 18:35:35 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:25.538 18:35:35 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:25.538 18:35:35 json_config -- json_config/common.sh@26 -- # echo '' 00:05:25.538 00:05:25.538 18:35:35 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:25.538 18:35:35 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:25.538 18:35:35 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:25.538 18:35:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.538 18:35:35 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:25.538 18:35:35 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:25.538 18:35:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:25.538 18:35:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.538 18:35:35 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:25.538 18:35:35 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:25.538 18:35:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:28.818 18:35:38 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:28.818 18:35:38 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:28.818 18:35:38 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:28.818 18:35:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.818 18:35:38 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:28.818 18:35:38 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:28.818 18:35:38 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:05:28.818 18:35:38 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:28.818 18:35:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:28.818 18:35:38 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:29.077 18:35:39 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:29.077 18:35:39 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:29.077 18:35:39 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:29.077 18:35:39 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:29.077 18:35:39 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:29.077 18:35:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.077 18:35:39 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:29.077 18:35:39 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:29.077 18:35:39 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:29.077 18:35:39 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:29.077 18:35:39 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:29.077 18:35:39 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:29.077 18:35:39 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:29.077 18:35:39 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:29.077 18:35:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.077 18:35:39 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:29.077 18:35:39 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:29.077 18:35:39 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:29.077 18:35:39 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:29.077 18:35:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:29.336 MallocForNvmf0 00:05:29.336 18:35:39 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:29.336 18:35:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:29.594 MallocForNvmf1 00:05:29.594 18:35:39 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:29.594 18:35:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:29.853 [2024-07-20 18:35:40.005947] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:29.853 18:35:40 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:29.853 18:35:40 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:30.110 18:35:40 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:30.110 18:35:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:30.367 18:35:40 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:30.367 18:35:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:30.625 18:35:40 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:30.625 18:35:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:30.883 [2024-07-20 18:35:40.981195] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:30.883 18:35:41 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:30.883 18:35:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.883 18:35:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.883 18:35:41 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:30.883 18:35:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.883 18:35:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.883 18:35:41 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:30.883 18:35:41 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:30.883 18:35:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:31.141 MallocBdevForConfigChangeCheck 00:05:31.141 18:35:41 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:31.141 18:35:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.141 18:35:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.141 18:35:41 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:31.141 18:35:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:31.398 18:35:41 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:31.398 INFO: shutting down applications... 
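The NVMe-oF target state built and then captured above can be reproduced with the same RPCs issued directly. Roughly, with the framework already initialized (paths shortened to the spdk repo root; the run above also loads a generated local NVMe bdev config via gen_nvme.sh, omitted here):

rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$rpc framework_start_init                      # only needed after --wait-for-rpc
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
$rpc nvmf_create_transport -t tcp -u 8192 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$rpc save_config > spdk_tgt_config.json        # snapshot used by the later checks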
00:05:31.398 18:35:41 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:31.398 18:35:41 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:31.398 18:35:41 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:31.398 18:35:41 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:33.294 Calling clear_iscsi_subsystem 00:05:33.294 Calling clear_nvmf_subsystem 00:05:33.294 Calling clear_nbd_subsystem 00:05:33.294 Calling clear_ublk_subsystem 00:05:33.294 Calling clear_vhost_blk_subsystem 00:05:33.294 Calling clear_vhost_scsi_subsystem 00:05:33.294 Calling clear_bdev_subsystem 00:05:33.294 18:35:43 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:33.294 18:35:43 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:33.294 18:35:43 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:33.294 18:35:43 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.294 18:35:43 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:33.294 18:35:43 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:33.552 18:35:43 json_config -- json_config/json_config.sh@345 -- # break 00:05:33.552 18:35:43 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:33.552 18:35:43 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:33.552 18:35:43 json_config -- json_config/common.sh@31 -- # local app=target 00:05:33.552 18:35:43 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:33.552 18:35:43 json_config -- json_config/common.sh@35 -- # [[ -n 1255860 ]] 00:05:33.552 18:35:43 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1255860 00:05:33.552 18:35:43 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:33.552 18:35:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.552 18:35:43 json_config -- json_config/common.sh@41 -- # kill -0 1255860 00:05:33.552 18:35:43 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:34.118 18:35:44 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:34.118 18:35:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.118 18:35:44 json_config -- json_config/common.sh@41 -- # kill -0 1255860 00:05:34.118 18:35:44 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:34.118 18:35:44 json_config -- json_config/common.sh@43 -- # break 00:05:34.118 18:35:44 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:34.118 18:35:44 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:34.118 SPDK target shutdown done 00:05:34.118 18:35:44 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:34.118 INFO: relaunching applications... 
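The clear step traced above (clear_config.py followed by the count=100 loop) amounts to wiping every subsystem and then confirming that nothing but global parameters remains in the saved config. A condensed sketch of that flow (retry pacing and error handling paraphrased):

test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
count=100
while (( count > 0 )); do
    if scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
         | test/json_config/config_filter.py -method delete_global_parameters \
         | test/json_config/config_filter.py -method check_empty; then
        break                      # configuration is empty, clear succeeded
    fi
    (( count-- ))
done
(( count == 0 )) && { echo 'ERROR: configuration was not cleared'; exit 1; }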
00:05:34.118 18:35:44 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:34.118 18:35:44 json_config -- json_config/common.sh@9 -- # local app=target 00:05:34.118 18:35:44 json_config -- json_config/common.sh@10 -- # shift 00:05:34.118 18:35:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:34.118 18:35:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:34.118 18:35:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:34.118 18:35:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.118 18:35:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.118 18:35:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1257066 00:05:34.118 18:35:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:34.118 18:35:44 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:34.118 Waiting for target to run... 00:05:34.118 18:35:44 json_config -- json_config/common.sh@25 -- # waitforlisten 1257066 /var/tmp/spdk_tgt.sock 00:05:34.118 18:35:44 json_config -- common/autotest_common.sh@827 -- # '[' -z 1257066 ']' 00:05:34.118 18:35:44 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:34.118 18:35:44 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:34.118 18:35:44 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:34.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:34.118 18:35:44 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:34.118 18:35:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.118 [2024-07-20 18:35:44.231183] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:34.118 [2024-07-20 18:35:44.231282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257066 ] 00:05:34.118 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.692 [2024-07-20 18:35:44.765521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.692 [2024-07-20 18:35:44.847653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.002 [2024-07-20 18:35:47.886733] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:38.002 [2024-07-20 18:35:47.919278] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:38.567 18:35:48 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:38.567 18:35:48 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:38.567 18:35:48 json_config -- json_config/common.sh@26 -- # echo '' 00:05:38.567 00:05:38.567 18:35:48 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:38.567 18:35:48 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:38.567 INFO: Checking if target configuration is the same... 
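Relaunching with --json, as traced above, is the key difference from the first start: instead of --wait-for-rpc plus live RPCs, the target replays the previously saved spdk_tgt_config.json at startup. In short:

# First run: build the config over RPC, then snapshot it.
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
# Second run: the same state comes back at boot, no RPCs required.
build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &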
00:05:38.568 18:35:48 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.568 18:35:48 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:38.568 18:35:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.568 + '[' 2 -ne 2 ']' 00:05:38.568 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:38.568 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:38.568 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:38.568 +++ basename /dev/fd/62 00:05:38.568 ++ mktemp /tmp/62.XXX 00:05:38.568 + tmp_file_1=/tmp/62.vvf 00:05:38.568 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.568 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:38.568 + tmp_file_2=/tmp/spdk_tgt_config.json.uc7 00:05:38.568 + ret=0 00:05:38.568 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.826 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.826 + diff -u /tmp/62.vvf /tmp/spdk_tgt_config.json.uc7 00:05:38.826 + echo 'INFO: JSON config files are the same' 00:05:38.826 INFO: JSON config files are the same 00:05:38.826 + rm /tmp/62.vvf /tmp/spdk_tgt_config.json.uc7 00:05:38.826 + exit 0 00:05:38.826 18:35:49 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:38.826 18:35:49 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:38.826 INFO: changing configuration and checking if this can be detected... 00:05:38.826 18:35:49 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:38.826 18:35:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:39.084 18:35:49 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.084 18:35:49 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:39.084 18:35:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.084 + '[' 2 -ne 2 ']' 00:05:39.084 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:39.084 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
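The "is the configuration the same" check that just passed is a plain text diff after normalization: both the live config (streamed over /dev/fd/62) and the on-disk file are sorted by config_filter.py before comparison. An equivalent manual check (temp file names are arbitrary):

scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | test/json_config/config_filter.py -method sort > /tmp/live_config.json
test/json_config/config_filter.py -method sort \
    < spdk_tgt_config.json > /tmp/saved_config.json
diff -u /tmp/live_config.json /tmp/saved_config.json \
    && echo 'INFO: JSON config files are the same'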
00:05:39.084 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:39.084 +++ basename /dev/fd/62 00:05:39.084 ++ mktemp /tmp/62.XXX 00:05:39.084 + tmp_file_1=/tmp/62.nKi 00:05:39.084 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.084 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:39.084 + tmp_file_2=/tmp/spdk_tgt_config.json.WZf 00:05:39.084 + ret=0 00:05:39.084 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.650 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.650 + diff -u /tmp/62.nKi /tmp/spdk_tgt_config.json.WZf 00:05:39.650 + ret=1 00:05:39.650 + echo '=== Start of file: /tmp/62.nKi ===' 00:05:39.650 + cat /tmp/62.nKi 00:05:39.650 + echo '=== End of file: /tmp/62.nKi ===' 00:05:39.650 + echo '' 00:05:39.650 + echo '=== Start of file: /tmp/spdk_tgt_config.json.WZf ===' 00:05:39.650 + cat /tmp/spdk_tgt_config.json.WZf 00:05:39.650 + echo '=== End of file: /tmp/spdk_tgt_config.json.WZf ===' 00:05:39.650 + echo '' 00:05:39.650 + rm /tmp/62.nKi /tmp/spdk_tgt_config.json.WZf 00:05:39.650 + exit 1 00:05:39.650 18:35:49 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:39.650 INFO: configuration change detected. 00:05:39.650 18:35:49 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:39.650 18:35:49 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:39.650 18:35:49 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:39.650 18:35:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.650 18:35:49 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:39.650 18:35:49 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:39.650 18:35:49 json_config -- json_config/json_config.sh@317 -- # [[ -n 1257066 ]] 00:05:39.650 18:35:49 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:39.650 18:35:49 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:39.650 18:35:49 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:39.650 18:35:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.650 18:35:49 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:39.651 18:35:49 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:39.651 18:35:49 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:39.651 18:35:49 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:39.651 18:35:49 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:39.651 18:35:49 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:39.651 18:35:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:39.651 18:35:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.651 18:35:49 json_config -- json_config/json_config.sh@323 -- # killprocess 1257066 00:05:39.651 18:35:49 json_config -- common/autotest_common.sh@946 -- # '[' -z 1257066 ']' 00:05:39.651 18:35:49 json_config -- common/autotest_common.sh@950 -- # kill -0 1257066 00:05:39.651 18:35:49 json_config -- common/autotest_common.sh@951 -- # uname 00:05:39.651 18:35:49 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:39.651 18:35:49 
json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1257066 00:05:39.651 18:35:49 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:39.651 18:35:49 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:39.651 18:35:49 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1257066' 00:05:39.651 killing process with pid 1257066 00:05:39.651 18:35:49 json_config -- common/autotest_common.sh@965 -- # kill 1257066 00:05:39.651 18:35:49 json_config -- common/autotest_common.sh@970 -- # wait 1257066 00:05:41.551 18:35:51 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.551 18:35:51 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:41.551 18:35:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.551 18:35:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.551 18:35:51 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:41.551 18:35:51 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:41.551 INFO: Success 00:05:41.551 00:05:41.551 real 0m16.644s 00:05:41.551 user 0m18.501s 00:05:41.551 sys 0m2.063s 00:05:41.551 18:35:51 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.551 18:35:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.551 ************************************ 00:05:41.551 END TEST json_config 00:05:41.551 ************************************ 00:05:41.551 18:35:51 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:41.551 18:35:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.551 18:35:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.551 18:35:51 -- common/autotest_common.sh@10 -- # set +x 00:05:41.551 ************************************ 00:05:41.551 START TEST json_config_extra_key 00:05:41.551 ************************************ 00:05:41.551 18:35:51 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:41.551 18:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.551 18:35:51 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:41.551 18:35:51 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.551 18:35:51 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.551 18:35:51 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.551 18:35:51 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.551 18:35:51 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.551 18:35:51 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.551 18:35:51 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:41.551 18:35:51 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.551 18:35:51 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:41.551 18:35:51 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:41.551 18:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:41.551 18:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:41.551 18:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:41.551 18:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:41.551 18:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:41.551 18:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:41.551 18:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:41.551 18:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:41.551 18:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:41.551 18:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:41.551 18:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:41.551 INFO: launching applications... 00:05:41.551 18:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:41.551 18:35:51 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:41.551 18:35:51 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:41.551 18:35:51 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:41.551 18:35:51 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:41.551 18:35:51 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:41.551 18:35:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.551 18:35:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.551 18:35:51 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1258088 00:05:41.551 18:35:51 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:41.551 Waiting for target to run... 
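The extra_key run starts spdk_tgt directly from test/json_config/extra_key.json. That file's contents are not shown in this log, but SPDK --json config files generally follow the same shape save_config emits: a list of subsystems, each carrying method/params entries. An illustrative shape only (names and sizes are made up, not taken from extra_key.json):

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "MallocForTest0", "num_blocks": 20480, "block_size": 512 } }
      ]
    }
  ]
}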
00:05:41.551 18:35:51 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1258088 /var/tmp/spdk_tgt.sock 00:05:41.551 18:35:51 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 1258088 ']' 00:05:41.552 18:35:51 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:41.552 18:35:51 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.552 18:35:51 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:41.552 18:35:51 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:41.552 18:35:51 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:41.552 18:35:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:41.552 [2024-07-20 18:35:51.564497] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:41.552 [2024-07-20 18:35:51.564574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258088 ] 00:05:41.552 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.810 [2024-07-20 18:35:51.895697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.810 [2024-07-20 18:35:51.958919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.377 18:35:52 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:42.377 18:35:52 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:05:42.377 18:35:52 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:42.377 00:05:42.377 18:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:42.377 INFO: shutting down applications... 
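The shutdown that follows uses the same json_config_test_shutdown_app helper as the earlier target run: send SIGINT to the recorded pid, then poll with kill -0 for up to 30 half-second intervals before declaring shutdown done. Roughly:

kill -SIGINT "$tgt_pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$tgt_pid" 2>/dev/null || break   # process gone: shutdown finished
    sleep 0.5
done
echo 'SPDK target shutdown done'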
00:05:42.377 18:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:42.377 18:35:52 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:42.377 18:35:52 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:42.377 18:35:52 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1258088 ]] 00:05:42.377 18:35:52 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1258088 00:05:42.377 18:35:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:42.377 18:35:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.377 18:35:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1258088 00:05:42.377 18:35:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:42.968 18:35:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:42.968 18:35:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.968 18:35:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1258088 00:05:42.968 18:35:53 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:42.968 18:35:53 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:42.968 18:35:53 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:42.968 18:35:53 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:42.968 SPDK target shutdown done 00:05:42.968 18:35:53 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:42.968 Success 00:05:42.968 00:05:42.968 real 0m1.546s 00:05:42.968 user 0m1.518s 00:05:42.968 sys 0m0.421s 00:05:42.968 18:35:53 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.968 18:35:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:42.968 ************************************ 00:05:42.968 END TEST json_config_extra_key 00:05:42.968 ************************************ 00:05:42.968 18:35:53 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:42.968 18:35:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:42.968 18:35:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.968 18:35:53 -- common/autotest_common.sh@10 -- # set +x 00:05:42.968 ************************************ 00:05:42.968 START TEST alias_rpc 00:05:42.968 ************************************ 00:05:42.968 18:35:53 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:42.969 * Looking for test storage... 
00:05:42.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:42.969 18:35:53 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:42.969 18:35:53 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1258282 00:05:42.969 18:35:53 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:42.969 18:35:53 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1258282 00:05:42.969 18:35:53 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 1258282 ']' 00:05:42.969 18:35:53 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.969 18:35:53 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:42.969 18:35:53 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.969 18:35:53 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:42.969 18:35:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.969 [2024-07-20 18:35:53.156965] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:42.969 [2024-07-20 18:35:53.157058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258282 ] 00:05:42.969 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.969 [2024-07-20 18:35:53.214188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.226 [2024-07-20 18:35:53.300896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.483 18:35:53 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:43.483 18:35:53 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:43.483 18:35:53 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:43.741 18:35:53 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1258282 00:05:43.741 18:35:53 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 1258282 ']' 00:05:43.741 18:35:53 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 1258282 00:05:43.741 18:35:53 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:05:43.741 18:35:53 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:43.741 18:35:53 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1258282 00:05:43.741 18:35:53 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:43.741 18:35:53 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:43.741 18:35:53 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1258282' 00:05:43.741 killing process with pid 1258282 00:05:43.741 18:35:53 alias_rpc -- common/autotest_common.sh@965 -- # kill 1258282 00:05:43.741 18:35:53 alias_rpc -- common/autotest_common.sh@970 -- # wait 1258282 00:05:43.999 00:05:43.999 real 0m1.213s 00:05:43.999 user 0m1.289s 00:05:43.999 sys 0m0.427s 00:05:43.999 18:35:54 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.999 18:35:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.999 
************************************ 00:05:43.999 END TEST alias_rpc 00:05:43.999 ************************************ 00:05:43.999 18:35:54 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:43.999 18:35:54 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:43.999 18:35:54 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:43.999 18:35:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.999 18:35:54 -- common/autotest_common.sh@10 -- # set +x 00:05:43.999 ************************************ 00:05:43.999 START TEST spdkcli_tcp 00:05:43.999 ************************************ 00:05:43.999 18:35:54 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:44.256 * Looking for test storage... 00:05:44.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:44.256 18:35:54 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:44.256 18:35:54 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:44.256 18:35:54 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:44.256 18:35:54 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:44.256 18:35:54 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:44.256 18:35:54 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:44.256 18:35:54 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:44.256 18:35:54 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:44.256 18:35:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.256 18:35:54 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1258483 00:05:44.256 18:35:54 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:44.256 18:35:54 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1258483 00:05:44.256 18:35:54 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 1258483 ']' 00:05:44.256 18:35:54 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.256 18:35:54 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:44.256 18:35:54 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.256 18:35:54 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:44.256 18:35:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.256 [2024-07-20 18:35:54.420392] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:05:44.256 [2024-07-20 18:35:54.420495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258483 ] 00:05:44.256 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.256 [2024-07-20 18:35:54.483129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.257 [2024-07-20 18:35:54.568313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.257 [2024-07-20 18:35:54.568316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.515 18:35:54 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:44.515 18:35:54 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:05:44.515 18:35:54 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1258599 00:05:44.515 18:35:54 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:44.515 18:35:54 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:44.773 [ 00:05:44.773 "bdev_malloc_delete", 00:05:44.773 "bdev_malloc_create", 00:05:44.773 "bdev_null_resize", 00:05:44.773 "bdev_null_delete", 00:05:44.773 "bdev_null_create", 00:05:44.773 "bdev_nvme_cuse_unregister", 00:05:44.773 "bdev_nvme_cuse_register", 00:05:44.773 "bdev_opal_new_user", 00:05:44.773 "bdev_opal_set_lock_state", 00:05:44.773 "bdev_opal_delete", 00:05:44.773 "bdev_opal_get_info", 00:05:44.773 "bdev_opal_create", 00:05:44.773 "bdev_nvme_opal_revert", 00:05:44.773 "bdev_nvme_opal_init", 00:05:44.773 "bdev_nvme_send_cmd", 00:05:44.773 "bdev_nvme_get_path_iostat", 00:05:44.773 "bdev_nvme_get_mdns_discovery_info", 00:05:44.773 "bdev_nvme_stop_mdns_discovery", 00:05:44.773 "bdev_nvme_start_mdns_discovery", 00:05:44.773 "bdev_nvme_set_multipath_policy", 00:05:44.773 "bdev_nvme_set_preferred_path", 00:05:44.773 "bdev_nvme_get_io_paths", 00:05:44.773 "bdev_nvme_remove_error_injection", 00:05:44.773 "bdev_nvme_add_error_injection", 00:05:44.773 "bdev_nvme_get_discovery_info", 00:05:44.773 "bdev_nvme_stop_discovery", 00:05:44.773 "bdev_nvme_start_discovery", 00:05:44.773 "bdev_nvme_get_controller_health_info", 00:05:44.773 "bdev_nvme_disable_controller", 00:05:44.773 "bdev_nvme_enable_controller", 00:05:44.773 "bdev_nvme_reset_controller", 00:05:44.773 "bdev_nvme_get_transport_statistics", 00:05:44.773 "bdev_nvme_apply_firmware", 00:05:44.773 "bdev_nvme_detach_controller", 00:05:44.773 "bdev_nvme_get_controllers", 00:05:44.773 "bdev_nvme_attach_controller", 00:05:44.773 "bdev_nvme_set_hotplug", 00:05:44.773 "bdev_nvme_set_options", 00:05:44.773 "bdev_passthru_delete", 00:05:44.773 "bdev_passthru_create", 00:05:44.773 "bdev_lvol_set_parent_bdev", 00:05:44.773 "bdev_lvol_set_parent", 00:05:44.773 "bdev_lvol_check_shallow_copy", 00:05:44.773 "bdev_lvol_start_shallow_copy", 00:05:44.773 "bdev_lvol_grow_lvstore", 00:05:44.773 "bdev_lvol_get_lvols", 00:05:44.773 "bdev_lvol_get_lvstores", 00:05:44.773 "bdev_lvol_delete", 00:05:44.773 "bdev_lvol_set_read_only", 00:05:44.773 "bdev_lvol_resize", 00:05:44.773 "bdev_lvol_decouple_parent", 00:05:44.773 "bdev_lvol_inflate", 00:05:44.773 "bdev_lvol_rename", 00:05:44.773 "bdev_lvol_clone_bdev", 00:05:44.773 "bdev_lvol_clone", 00:05:44.773 "bdev_lvol_snapshot", 00:05:44.773 "bdev_lvol_create", 00:05:44.773 "bdev_lvol_delete_lvstore", 00:05:44.773 "bdev_lvol_rename_lvstore", 
00:05:44.773 "bdev_lvol_create_lvstore", 00:05:44.773 "bdev_raid_set_options", 00:05:44.773 "bdev_raid_remove_base_bdev", 00:05:44.773 "bdev_raid_add_base_bdev", 00:05:44.773 "bdev_raid_delete", 00:05:44.773 "bdev_raid_create", 00:05:44.773 "bdev_raid_get_bdevs", 00:05:44.773 "bdev_error_inject_error", 00:05:44.773 "bdev_error_delete", 00:05:44.773 "bdev_error_create", 00:05:44.773 "bdev_split_delete", 00:05:44.773 "bdev_split_create", 00:05:44.773 "bdev_delay_delete", 00:05:44.773 "bdev_delay_create", 00:05:44.773 "bdev_delay_update_latency", 00:05:44.773 "bdev_zone_block_delete", 00:05:44.773 "bdev_zone_block_create", 00:05:44.773 "blobfs_create", 00:05:44.773 "blobfs_detect", 00:05:44.773 "blobfs_set_cache_size", 00:05:44.773 "bdev_aio_delete", 00:05:44.773 "bdev_aio_rescan", 00:05:44.773 "bdev_aio_create", 00:05:44.773 "bdev_ftl_set_property", 00:05:44.773 "bdev_ftl_get_properties", 00:05:44.773 "bdev_ftl_get_stats", 00:05:44.773 "bdev_ftl_unmap", 00:05:44.773 "bdev_ftl_unload", 00:05:44.773 "bdev_ftl_delete", 00:05:44.773 "bdev_ftl_load", 00:05:44.773 "bdev_ftl_create", 00:05:44.773 "bdev_virtio_attach_controller", 00:05:44.773 "bdev_virtio_scsi_get_devices", 00:05:44.773 "bdev_virtio_detach_controller", 00:05:44.773 "bdev_virtio_blk_set_hotplug", 00:05:44.773 "bdev_iscsi_delete", 00:05:44.773 "bdev_iscsi_create", 00:05:44.773 "bdev_iscsi_set_options", 00:05:44.773 "accel_error_inject_error", 00:05:44.773 "ioat_scan_accel_module", 00:05:44.773 "dsa_scan_accel_module", 00:05:44.773 "iaa_scan_accel_module", 00:05:44.773 "vfu_virtio_create_scsi_endpoint", 00:05:44.773 "vfu_virtio_scsi_remove_target", 00:05:44.773 "vfu_virtio_scsi_add_target", 00:05:44.773 "vfu_virtio_create_blk_endpoint", 00:05:44.773 "vfu_virtio_delete_endpoint", 00:05:44.773 "keyring_file_remove_key", 00:05:44.773 "keyring_file_add_key", 00:05:44.773 "keyring_linux_set_options", 00:05:44.773 "iscsi_get_histogram", 00:05:44.773 "iscsi_enable_histogram", 00:05:44.773 "iscsi_set_options", 00:05:44.773 "iscsi_get_auth_groups", 00:05:44.773 "iscsi_auth_group_remove_secret", 00:05:44.773 "iscsi_auth_group_add_secret", 00:05:44.773 "iscsi_delete_auth_group", 00:05:44.773 "iscsi_create_auth_group", 00:05:44.773 "iscsi_set_discovery_auth", 00:05:44.773 "iscsi_get_options", 00:05:44.773 "iscsi_target_node_request_logout", 00:05:44.773 "iscsi_target_node_set_redirect", 00:05:44.773 "iscsi_target_node_set_auth", 00:05:44.773 "iscsi_target_node_add_lun", 00:05:44.773 "iscsi_get_stats", 00:05:44.773 "iscsi_get_connections", 00:05:44.773 "iscsi_portal_group_set_auth", 00:05:44.773 "iscsi_start_portal_group", 00:05:44.773 "iscsi_delete_portal_group", 00:05:44.773 "iscsi_create_portal_group", 00:05:44.773 "iscsi_get_portal_groups", 00:05:44.773 "iscsi_delete_target_node", 00:05:44.773 "iscsi_target_node_remove_pg_ig_maps", 00:05:44.773 "iscsi_target_node_add_pg_ig_maps", 00:05:44.773 "iscsi_create_target_node", 00:05:44.773 "iscsi_get_target_nodes", 00:05:44.773 "iscsi_delete_initiator_group", 00:05:44.773 "iscsi_initiator_group_remove_initiators", 00:05:44.773 "iscsi_initiator_group_add_initiators", 00:05:44.773 "iscsi_create_initiator_group", 00:05:44.773 "iscsi_get_initiator_groups", 00:05:44.773 "nvmf_set_crdt", 00:05:44.773 "nvmf_set_config", 00:05:44.773 "nvmf_set_max_subsystems", 00:05:44.773 "nvmf_stop_mdns_prr", 00:05:44.773 "nvmf_publish_mdns_prr", 00:05:44.773 "nvmf_subsystem_get_listeners", 00:05:44.773 "nvmf_subsystem_get_qpairs", 00:05:44.773 "nvmf_subsystem_get_controllers", 00:05:44.773 "nvmf_get_stats", 00:05:44.773 
"nvmf_get_transports", 00:05:44.773 "nvmf_create_transport", 00:05:44.773 "nvmf_get_targets", 00:05:44.773 "nvmf_delete_target", 00:05:44.773 "nvmf_create_target", 00:05:44.773 "nvmf_subsystem_allow_any_host", 00:05:44.773 "nvmf_subsystem_remove_host", 00:05:44.773 "nvmf_subsystem_add_host", 00:05:44.773 "nvmf_ns_remove_host", 00:05:44.773 "nvmf_ns_add_host", 00:05:44.773 "nvmf_subsystem_remove_ns", 00:05:44.773 "nvmf_subsystem_add_ns", 00:05:44.773 "nvmf_subsystem_listener_set_ana_state", 00:05:44.773 "nvmf_discovery_get_referrals", 00:05:44.773 "nvmf_discovery_remove_referral", 00:05:44.773 "nvmf_discovery_add_referral", 00:05:44.773 "nvmf_subsystem_remove_listener", 00:05:44.773 "nvmf_subsystem_add_listener", 00:05:44.773 "nvmf_delete_subsystem", 00:05:44.773 "nvmf_create_subsystem", 00:05:44.773 "nvmf_get_subsystems", 00:05:44.773 "env_dpdk_get_mem_stats", 00:05:44.773 "nbd_get_disks", 00:05:44.773 "nbd_stop_disk", 00:05:44.773 "nbd_start_disk", 00:05:44.773 "ublk_recover_disk", 00:05:44.773 "ublk_get_disks", 00:05:44.773 "ublk_stop_disk", 00:05:44.773 "ublk_start_disk", 00:05:44.773 "ublk_destroy_target", 00:05:44.773 "ublk_create_target", 00:05:44.773 "virtio_blk_create_transport", 00:05:44.773 "virtio_blk_get_transports", 00:05:44.773 "vhost_controller_set_coalescing", 00:05:44.773 "vhost_get_controllers", 00:05:44.773 "vhost_delete_controller", 00:05:44.773 "vhost_create_blk_controller", 00:05:44.773 "vhost_scsi_controller_remove_target", 00:05:44.773 "vhost_scsi_controller_add_target", 00:05:44.773 "vhost_start_scsi_controller", 00:05:44.773 "vhost_create_scsi_controller", 00:05:44.773 "thread_set_cpumask", 00:05:44.773 "framework_get_scheduler", 00:05:44.773 "framework_set_scheduler", 00:05:44.773 "framework_get_reactors", 00:05:44.773 "thread_get_io_channels", 00:05:44.773 "thread_get_pollers", 00:05:44.773 "thread_get_stats", 00:05:44.773 "framework_monitor_context_switch", 00:05:44.773 "spdk_kill_instance", 00:05:44.773 "log_enable_timestamps", 00:05:44.773 "log_get_flags", 00:05:44.773 "log_clear_flag", 00:05:44.773 "log_set_flag", 00:05:44.773 "log_get_level", 00:05:44.773 "log_set_level", 00:05:44.773 "log_get_print_level", 00:05:44.773 "log_set_print_level", 00:05:44.773 "framework_enable_cpumask_locks", 00:05:44.773 "framework_disable_cpumask_locks", 00:05:44.773 "framework_wait_init", 00:05:44.773 "framework_start_init", 00:05:44.773 "scsi_get_devices", 00:05:44.773 "bdev_get_histogram", 00:05:44.773 "bdev_enable_histogram", 00:05:44.773 "bdev_set_qos_limit", 00:05:44.773 "bdev_set_qd_sampling_period", 00:05:44.773 "bdev_get_bdevs", 00:05:44.773 "bdev_reset_iostat", 00:05:44.773 "bdev_get_iostat", 00:05:44.773 "bdev_examine", 00:05:44.773 "bdev_wait_for_examine", 00:05:44.773 "bdev_set_options", 00:05:44.773 "notify_get_notifications", 00:05:44.773 "notify_get_types", 00:05:44.773 "accel_get_stats", 00:05:44.773 "accel_set_options", 00:05:44.773 "accel_set_driver", 00:05:44.773 "accel_crypto_key_destroy", 00:05:44.773 "accel_crypto_keys_get", 00:05:44.773 "accel_crypto_key_create", 00:05:44.773 "accel_assign_opc", 00:05:44.773 "accel_get_module_info", 00:05:44.773 "accel_get_opc_assignments", 00:05:44.773 "vmd_rescan", 00:05:44.773 "vmd_remove_device", 00:05:44.773 "vmd_enable", 00:05:44.773 "sock_get_default_impl", 00:05:44.773 "sock_set_default_impl", 00:05:44.773 "sock_impl_set_options", 00:05:44.773 "sock_impl_get_options", 00:05:44.773 "iobuf_get_stats", 00:05:44.773 "iobuf_set_options", 00:05:44.773 "keyring_get_keys", 00:05:44.773 "framework_get_pci_devices", 
00:05:44.773 "framework_get_config", 00:05:44.773 "framework_get_subsystems", 00:05:44.773 "vfu_tgt_set_base_path", 00:05:44.773 "trace_get_info", 00:05:44.773 "trace_get_tpoint_group_mask", 00:05:44.773 "trace_disable_tpoint_group", 00:05:44.773 "trace_enable_tpoint_group", 00:05:44.773 "trace_clear_tpoint_mask", 00:05:44.773 "trace_set_tpoint_mask", 00:05:44.773 "spdk_get_version", 00:05:44.773 "rpc_get_methods" 00:05:44.773 ] 00:05:44.773 18:35:55 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:44.773 18:35:55 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:44.773 18:35:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.773 18:35:55 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:44.773 18:35:55 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1258483 00:05:45.031 18:35:55 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 1258483 ']' 00:05:45.031 18:35:55 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 1258483 00:05:45.031 18:35:55 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:05:45.031 18:35:55 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:45.031 18:35:55 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1258483 00:05:45.031 18:35:55 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:45.031 18:35:55 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:45.031 18:35:55 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1258483' 00:05:45.031 killing process with pid 1258483 00:05:45.031 18:35:55 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 1258483 00:05:45.031 18:35:55 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 1258483 00:05:45.288 00:05:45.288 real 0m1.222s 00:05:45.288 user 0m2.197s 00:05:45.288 sys 0m0.431s 00:05:45.288 18:35:55 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.288 18:35:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.288 ************************************ 00:05:45.288 END TEST spdkcli_tcp 00:05:45.288 ************************************ 00:05:45.288 18:35:55 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:45.288 18:35:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.288 18:35:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.288 18:35:55 -- common/autotest_common.sh@10 -- # set +x 00:05:45.288 ************************************ 00:05:45.288 START TEST dpdk_mem_utility 00:05:45.288 ************************************ 00:05:45.288 18:35:55 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:45.545 * Looking for test storage... 
00:05:45.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:45.546 18:35:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:45.546 18:35:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1258786 00:05:45.546 18:35:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.546 18:35:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1258786 00:05:45.546 18:35:55 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 1258786 ']' 00:05:45.546 18:35:55 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.546 18:35:55 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:45.546 18:35:55 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.546 18:35:55 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:45.546 18:35:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:45.546 [2024-07-20 18:35:55.687164] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:45.546 [2024-07-20 18:35:55.687245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258786 ] 00:05:45.546 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.546 [2024-07-20 18:35:55.744169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.546 [2024-07-20 18:35:55.829507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.803 18:35:56 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:45.803 18:35:56 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:05:45.803 18:35:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:45.803 18:35:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:45.803 18:35:56 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.803 18:35:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:45.803 { 00:05:45.803 "filename": "/tmp/spdk_mem_dump.txt" 00:05:45.803 } 00:05:45.803 18:35:56 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.803 18:35:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:46.060 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:46.061 1 heaps totaling size 814.000000 MiB 00:05:46.061 size: 814.000000 MiB heap id: 0 00:05:46.061 end heaps---------- 00:05:46.061 8 mempools totaling size 598.116089 MiB 00:05:46.061 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:46.061 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:46.061 size: 84.521057 MiB name: bdev_io_1258786 00:05:46.061 size: 51.011292 MiB name: evtpool_1258786 00:05:46.061 size: 50.003479 MiB name: 
msgpool_1258786 00:05:46.061 size: 21.763794 MiB name: PDU_Pool 00:05:46.061 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:46.061 size: 0.026123 MiB name: Session_Pool 00:05:46.061 end mempools------- 00:05:46.061 6 memzones totaling size 4.142822 MiB 00:05:46.061 size: 1.000366 MiB name: RG_ring_0_1258786 00:05:46.061 size: 1.000366 MiB name: RG_ring_1_1258786 00:05:46.061 size: 1.000366 MiB name: RG_ring_4_1258786 00:05:46.061 size: 1.000366 MiB name: RG_ring_5_1258786 00:05:46.061 size: 0.125366 MiB name: RG_ring_2_1258786 00:05:46.061 size: 0.015991 MiB name: RG_ring_3_1258786 00:05:46.061 end memzones------- 00:05:46.061 18:35:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:46.061 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:46.061 list of free elements. size: 12.519348 MiB 00:05:46.061 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:46.061 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:46.061 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:46.061 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:46.061 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:46.061 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:46.061 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:46.061 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:46.061 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:46.061 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:46.061 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:46.061 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:46.061 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:46.061 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:46.061 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:46.061 list of standard malloc elements. 
size: 199.218079 MiB 00:05:46.061 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:46.061 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:46.061 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:46.061 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:46.061 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:46.061 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:46.061 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:46.061 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:46.061 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:46.061 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:46.061 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:46.061 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:46.061 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:46.061 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:46.061 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:46.061 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:46.061 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:46.061 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:46.061 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:46.061 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:46.061 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:46.061 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:46.061 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:46.061 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:46.061 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:46.061 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:46.061 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:46.061 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:46.061 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:46.061 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:46.061 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:46.061 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:46.061 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:46.061 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:46.061 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:46.061 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:46.061 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:46.061 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:46.061 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:46.061 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:46.061 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:46.061 list of memzone associated elements. 
size: 602.262573 MiB 00:05:46.061 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:46.061 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:46.061 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:46.061 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:46.061 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:46.061 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1258786_0 00:05:46.061 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:46.061 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1258786_0 00:05:46.061 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:46.061 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1258786_0 00:05:46.061 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:46.061 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:46.061 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:46.061 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:46.061 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:46.061 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1258786 00:05:46.061 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:46.061 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1258786 00:05:46.061 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:46.061 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1258786 00:05:46.061 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:46.061 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:46.061 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:46.061 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:46.061 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:46.061 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:46.061 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:46.061 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:46.061 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:46.061 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1258786 00:05:46.061 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:46.061 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1258786 00:05:46.061 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:46.061 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1258786 00:05:46.061 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:46.061 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1258786 00:05:46.061 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:46.061 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1258786 00:05:46.061 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:46.061 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:46.061 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:46.061 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:46.061 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:46.061 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:46.061 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:46.061 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1258786 00:05:46.061 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:46.061 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:46.061 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:46.061 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:46.061 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:46.061 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1258786 00:05:46.061 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:46.061 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:46.061 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:46.061 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1258786 00:05:46.061 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:46.061 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1258786 00:05:46.061 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:46.061 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:46.061 18:35:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:46.061 18:35:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1258786 00:05:46.061 18:35:56 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 1258786 ']' 00:05:46.061 18:35:56 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 1258786 00:05:46.061 18:35:56 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:05:46.061 18:35:56 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:46.061 18:35:56 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1258786 00:05:46.061 18:35:56 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:46.061 18:35:56 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:46.061 18:35:56 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1258786' 00:05:46.061 killing process with pid 1258786 00:05:46.061 18:35:56 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 1258786 00:05:46.061 18:35:56 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 1258786 00:05:46.319 00:05:46.319 real 0m1.047s 00:05:46.319 user 0m1.020s 00:05:46.319 sys 0m0.388s 00:05:46.319 18:35:56 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.319 18:35:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:46.319 ************************************ 00:05:46.319 END TEST dpdk_mem_utility 00:05:46.319 ************************************ 00:05:46.578 18:35:56 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:46.578 18:35:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:46.578 18:35:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.578 18:35:56 -- common/autotest_common.sh@10 -- # set +x 00:05:46.578 ************************************ 00:05:46.578 START TEST event 00:05:46.578 ************************************ 00:05:46.578 18:35:56 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:46.578 * Looking for test storage... 
00:05:46.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:46.578 18:35:56 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:46.578 18:35:56 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:46.578 18:35:56 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:46.578 18:35:56 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:46.578 18:35:56 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.578 18:35:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.578 ************************************ 00:05:46.578 START TEST event_perf 00:05:46.578 ************************************ 00:05:46.578 18:35:56 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:46.578 Running I/O for 1 seconds...[2024-07-20 18:35:56.772330] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:46.578 [2024-07-20 18:35:56.772395] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258975 ] 00:05:46.578 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.578 [2024-07-20 18:35:56.836026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:46.836 [2024-07-20 18:35:56.929641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.836 [2024-07-20 18:35:56.929711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.836 [2024-07-20 18:35:56.929812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.836 [2024-07-20 18:35:56.929816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.769 Running I/O for 1 seconds... 00:05:47.769 lcore 0: 237953 00:05:47.769 lcore 1: 237953 00:05:47.769 lcore 2: 237952 00:05:47.769 lcore 3: 237952 00:05:47.769 done. 00:05:47.769 00:05:47.769 real 0m1.255s 00:05:47.769 user 0m4.167s 00:05:47.769 sys 0m0.079s 00:05:47.769 18:35:58 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.769 18:35:58 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:47.769 ************************************ 00:05:47.769 END TEST event_perf 00:05:47.769 ************************************ 00:05:47.769 18:35:58 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:47.769 18:35:58 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:47.769 18:35:58 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.769 18:35:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.769 ************************************ 00:05:47.769 START TEST event_reactor 00:05:47.769 ************************************ 00:05:47.769 18:35:58 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:47.769 [2024-07-20 18:35:58.077296] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:05:47.769 [2024-07-20 18:35:58.077365] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259135 ] 00:05:48.027 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.027 [2024-07-20 18:35:58.142025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.027 [2024-07-20 18:35:58.234212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.410 test_start 00:05:49.410 oneshot 00:05:49.410 tick 100 00:05:49.410 tick 100 00:05:49.410 tick 250 00:05:49.410 tick 100 00:05:49.410 tick 100 00:05:49.410 tick 100 00:05:49.410 tick 250 00:05:49.410 tick 500 00:05:49.410 tick 100 00:05:49.410 tick 100 00:05:49.410 tick 250 00:05:49.410 tick 100 00:05:49.410 tick 100 00:05:49.410 test_end 00:05:49.410 00:05:49.410 real 0m1.251s 00:05:49.410 user 0m1.159s 00:05:49.410 sys 0m0.087s 00:05:49.410 18:35:59 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.410 18:35:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:49.410 ************************************ 00:05:49.410 END TEST event_reactor 00:05:49.410 ************************************ 00:05:49.410 18:35:59 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:49.410 18:35:59 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:49.410 18:35:59 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.410 18:35:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.410 ************************************ 00:05:49.410 START TEST event_reactor_perf 00:05:49.410 ************************************ 00:05:49.410 18:35:59 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:49.410 [2024-07-20 18:35:59.370712] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:05:49.410 [2024-07-20 18:35:59.370773] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259298 ] 00:05:49.410 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.410 [2024-07-20 18:35:59.432679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.410 [2024-07-20 18:35:59.525008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.345 test_start 00:05:50.345 test_end 00:05:50.345 Performance: 357758 events per second 00:05:50.345 00:05:50.345 real 0m1.247s 00:05:50.345 user 0m1.162s 00:05:50.345 sys 0m0.080s 00:05:50.345 18:36:00 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.345 18:36:00 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.345 ************************************ 00:05:50.346 END TEST event_reactor_perf 00:05:50.346 ************************************ 00:05:50.346 18:36:00 event -- event/event.sh@49 -- # uname -s 00:05:50.346 18:36:00 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:50.346 18:36:00 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:50.346 18:36:00 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.346 18:36:00 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.346 18:36:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.346 ************************************ 00:05:50.346 START TEST event_scheduler 00:05:50.346 ************************************ 00:05:50.346 18:36:00 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:50.604 * Looking for test storage... 00:05:50.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:50.604 18:36:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:50.604 18:36:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1259528 00:05:50.604 18:36:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:50.604 18:36:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.604 18:36:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1259528 00:05:50.604 18:36:00 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 1259528 ']' 00:05:50.604 18:36:00 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.604 18:36:00 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:50.604 18:36:00 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:50.604 18:36:00 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:50.604 18:36:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.604 [2024-07-20 18:36:00.751593] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:50.604 [2024-07-20 18:36:00.751684] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259528 ] 00:05:50.604 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.604 [2024-07-20 18:36:00.814393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:50.604 [2024-07-20 18:36:00.904377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.604 [2024-07-20 18:36:00.904443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.604 [2024-07-20 18:36:00.904501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.604 [2024-07-20 18:36:00.904504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.863 18:36:00 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:50.863 18:36:00 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:05:50.863 18:36:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:50.863 18:36:00 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.863 18:36:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.863 POWER: Env isn't set yet! 00:05:50.863 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:50.863 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:05:50.863 POWER: Cannot get available frequencies of lcore 0 00:05:50.863 POWER: Attempting to initialise PSTAT power management... 
00:05:50.863 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:50.863 POWER: Initialized successfully for lcore 0 power management 00:05:50.863 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:50.863 POWER: Initialized successfully for lcore 1 power management 00:05:50.863 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:50.863 POWER: Initialized successfully for lcore 2 power management 00:05:50.863 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:50.863 POWER: Initialized successfully for lcore 3 power management 00:05:50.863 [2024-07-20 18:36:01.022987] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:50.863 [2024-07-20 18:36:01.023005] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:50.863 [2024-07-20 18:36:01.023016] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:50.863 18:36:01 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.863 18:36:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:50.863 18:36:01 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.863 18:36:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.863 [2024-07-20 18:36:01.121413] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:50.863 18:36:01 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.863 18:36:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:50.863 18:36:01 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.863 18:36:01 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.863 18:36:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.863 ************************************ 00:05:50.863 START TEST scheduler_create_thread 00:05:50.863 ************************************ 00:05:50.863 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:05:50.863 18:36:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:50.863 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.863 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.863 2 00:05:50.863 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.863 18:36:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:50.863 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.863 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.863 3 00:05:50.863 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.863 18:36:01 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:50.863 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.863 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.863 4 00:05:50.863 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.863 18:36:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:50.863 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.864 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.122 5 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.122 6 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.122 7 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.122 8 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.122 9 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.122 10 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.122 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.686 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.686 18:36:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:51.686 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.686 18:36:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.055 18:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.055 18:36:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:53.055 18:36:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:53.055 18:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.055 18:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.984 18:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.984 00:05:53.984 real 0m3.099s 00:05:53.984 user 0m0.009s 00:05:53.984 sys 0m0.005s 00:05:53.984 18:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:53.984 18:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.984 ************************************ 00:05:53.984 END TEST scheduler_create_thread 00:05:53.984 ************************************ 00:05:53.984 18:36:04 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:53.984 18:36:04 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1259528 00:05:53.984 18:36:04 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 1259528 ']' 00:05:53.984 18:36:04 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 1259528 00:05:53.984 18:36:04 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 
00:05:53.984 18:36:04 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:53.984 18:36:04 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1259528 00:05:53.984 18:36:04 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:53.984 18:36:04 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:53.984 18:36:04 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1259528' 00:05:53.984 killing process with pid 1259528 00:05:53.984 18:36:04 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 1259528 00:05:53.984 18:36:04 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 1259528 00:05:54.548 [2024-07-20 18:36:04.625698] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:54.548 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:05:54.548 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:54.548 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:05:54.548 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:54.548 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:05:54.548 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:54.548 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:05:54.548 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:54.807 00:05:54.807 real 0m4.222s 00:05:54.807 user 0m6.942s 00:05:54.807 sys 0m0.337s 00:05:54.807 18:36:04 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.807 18:36:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.807 ************************************ 00:05:54.807 END TEST event_scheduler 00:05:54.807 ************************************ 00:05:54.807 18:36:04 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:54.807 18:36:04 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:54.807 18:36:04 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:54.807 18:36:04 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.807 18:36:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.807 ************************************ 00:05:54.807 START TEST app_repeat 00:05:54.807 ************************************ 00:05:54.807 18:36:04 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:05:54.807 18:36:04 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.807 18:36:04 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.807 18:36:04 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:54.807 18:36:04 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.807 18:36:04 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:54.807 18:36:04 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:54.807 18:36:04 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:54.807 18:36:04 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1260171 00:05:54.807 18:36:04 
event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:54.807 18:36:04 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.807 18:36:04 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1260171' 00:05:54.807 Process app_repeat pid: 1260171 00:05:54.807 18:36:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.807 18:36:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:54.807 spdk_app_start Round 0 00:05:54.807 18:36:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1260171 /var/tmp/spdk-nbd.sock 00:05:54.807 18:36:04 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1260171 ']' 00:05:54.807 18:36:04 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.807 18:36:04 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:54.807 18:36:04 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.807 18:36:04 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:54.807 18:36:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.807 [2024-07-20 18:36:04.948637] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:54.807 [2024-07-20 18:36:04.948705] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260171 ] 00:05:54.807 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.807 [2024-07-20 18:36:05.007138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.807 [2024-07-20 18:36:05.098846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.807 [2024-07-20 18:36:05.098850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.065 18:36:05 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:55.065 18:36:05 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:55.065 18:36:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.358 Malloc0 00:05:55.358 18:36:05 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.615 Malloc1 00:05:55.615 18:36:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.615 18:36:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.615 18:36:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.615 18:36:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.615 18:36:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.615 18:36:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.615 18:36:05 event.app_repeat 
-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.615 18:36:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.615 18:36:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.615 18:36:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.615 18:36:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.615 18:36:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.615 18:36:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:55.615 18:36:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.615 18:36:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.615 18:36:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.871 /dev/nbd0 00:05:55.871 18:36:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.871 18:36:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.872 18:36:05 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:55.872 18:36:05 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:55.872 18:36:05 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:55.872 18:36:05 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:55.872 18:36:05 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:55.872 18:36:05 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:55.872 18:36:05 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:55.872 18:36:05 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:55.872 18:36:05 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.872 1+0 records in 00:05:55.872 1+0 records out 00:05:55.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000153546 s, 26.7 MB/s 00:05:55.872 18:36:05 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.872 18:36:05 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:55.872 18:36:05 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.872 18:36:05 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:55.872 18:36:05 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:55.872 18:36:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.872 18:36:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.872 18:36:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.129 /dev/nbd1 00:05:56.129 18:36:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.129 18:36:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.129 18:36:06 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:56.129 18:36:06 event.app_repeat -- 
common/autotest_common.sh@865 -- # local i 00:05:56.129 18:36:06 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:56.129 18:36:06 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:56.129 18:36:06 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:56.129 18:36:06 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:56.129 18:36:06 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:56.129 18:36:06 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:56.129 18:36:06 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.129 1+0 records in 00:05:56.129 1+0 records out 00:05:56.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212796 s, 19.2 MB/s 00:05:56.129 18:36:06 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.129 18:36:06 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:56.129 18:36:06 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.129 18:36:06 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:56.129 18:36:06 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:56.129 18:36:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.129 18:36:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.129 18:36:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.129 18:36:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.129 18:36:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.386 18:36:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:56.386 { 00:05:56.386 "nbd_device": "/dev/nbd0", 00:05:56.386 "bdev_name": "Malloc0" 00:05:56.386 }, 00:05:56.386 { 00:05:56.386 "nbd_device": "/dev/nbd1", 00:05:56.386 "bdev_name": "Malloc1" 00:05:56.386 } 00:05:56.386 ]' 00:05:56.386 18:36:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.386 { 00:05:56.386 "nbd_device": "/dev/nbd0", 00:05:56.386 "bdev_name": "Malloc0" 00:05:56.386 }, 00:05:56.386 { 00:05:56.386 "nbd_device": "/dev/nbd1", 00:05:56.386 "bdev_name": "Malloc1" 00:05:56.386 } 00:05:56.386 ]' 00:05:56.386 18:36:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.386 18:36:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.386 /dev/nbd1' 00:05:56.386 18:36:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.386 /dev/nbd1' 00:05:56.386 18:36:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.386 18:36:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.386 18:36:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.386 18:36:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.386 18:36:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.386 18:36:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.386 18:36:06 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.386 18:36:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.386 18:36:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.386 18:36:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.386 18:36:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.386 18:36:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.386 256+0 records in 00:05:56.386 256+0 records out 00:05:56.386 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00494919 s, 212 MB/s 00:05:56.386 18:36:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.386 18:36:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.386 256+0 records in 00:05:56.386 256+0 records out 00:05:56.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202564 s, 51.8 MB/s 00:05:56.387 18:36:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.387 18:36:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.387 256+0 records in 00:05:56.387 256+0 records out 00:05:56.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249866 s, 42.0 MB/s 00:05:56.387 18:36:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.387 18:36:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.387 18:36:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.387 18:36:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.387 18:36:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.387 18:36:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.387 18:36:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.387 18:36:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.387 18:36:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:56.387 18:36:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.387 18:36:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:56.387 18:36:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.387 18:36:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.387 18:36:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.387 18:36:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.387 18:36:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.387 18:36:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:05:56.387 18:36:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.387 18:36:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.644 18:36:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.644 18:36:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.644 18:36:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.644 18:36:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.644 18:36:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.644 18:36:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.644 18:36:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.644 18:36:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.644 18:36:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.644 18:36:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.904 18:36:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.904 18:36:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.904 18:36:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.904 18:36:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.904 18:36:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.904 18:36:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.904 18:36:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.904 18:36:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.904 18:36:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.904 18:36:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.904 18:36:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.162 18:36:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.162 18:36:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.162 18:36:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.162 18:36:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.162 18:36:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.162 18:36:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.162 18:36:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:57.162 18:36:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.162 18:36:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.162 18:36:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.162 18:36:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.162 18:36:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.162 18:36:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.421 18:36:07 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:05:57.680 [2024-07-20 18:36:07.913605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.680 [2024-07-20 18:36:08.002751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.680 [2024-07-20 18:36:08.002751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.938 [2024-07-20 18:36:08.062359] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.938 [2024-07-20 18:36:08.062432] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.466 18:36:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.466 18:36:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:00.466 spdk_app_start Round 1 00:06:00.466 18:36:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1260171 /var/tmp/spdk-nbd.sock 00:06:00.466 18:36:10 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1260171 ']' 00:06:00.466 18:36:10 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.466 18:36:10 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:00.466 18:36:10 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.466 18:36:10 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:00.466 18:36:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.722 18:36:10 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:00.722 18:36:10 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:00.722 18:36:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.979 Malloc0 00:06:00.979 18:36:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.236 Malloc1 00:06:01.236 18:36:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.236 18:36:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.236 18:36:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.236 18:36:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:01.236 18:36:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.236 18:36:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:01.237 18:36:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.237 18:36:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.237 18:36:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.237 18:36:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:01.237 18:36:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.237 18:36:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:06:01.237 18:36:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:01.237 18:36:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:01.237 18:36:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.237 18:36:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.494 /dev/nbd0 00:06:01.494 18:36:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.494 18:36:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.494 18:36:11 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:01.494 18:36:11 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:01.494 18:36:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:01.494 18:36:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:01.494 18:36:11 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:01.494 18:36:11 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:01.494 18:36:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:01.494 18:36:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:01.494 18:36:11 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.494 1+0 records in 00:06:01.494 1+0 records out 00:06:01.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000173854 s, 23.6 MB/s 00:06:01.494 18:36:11 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.494 18:36:11 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:01.494 18:36:11 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.494 18:36:11 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:01.494 18:36:11 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:01.494 18:36:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.494 18:36:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.494 18:36:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.750 /dev/nbd1 00:06:01.750 18:36:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.750 18:36:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.750 18:36:11 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:01.750 18:36:11 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:01.750 18:36:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:01.750 18:36:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:01.750 18:36:11 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:01.750 18:36:11 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:01.750 18:36:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:01.750 18:36:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 
00:06:01.750 18:36:11 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.750 1+0 records in 00:06:01.750 1+0 records out 00:06:01.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185524 s, 22.1 MB/s 00:06:01.750 18:36:11 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.750 18:36:11 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:01.750 18:36:11 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.750 18:36:11 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:01.750 18:36:11 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:01.750 18:36:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.750 18:36:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.750 18:36:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.750 18:36:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.750 18:36:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.007 18:36:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:02.007 { 00:06:02.007 "nbd_device": "/dev/nbd0", 00:06:02.007 "bdev_name": "Malloc0" 00:06:02.007 }, 00:06:02.007 { 00:06:02.007 "nbd_device": "/dev/nbd1", 00:06:02.007 "bdev_name": "Malloc1" 00:06:02.007 } 00:06:02.007 ]' 00:06:02.007 18:36:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:02.007 { 00:06:02.007 "nbd_device": "/dev/nbd0", 00:06:02.007 "bdev_name": "Malloc0" 00:06:02.007 }, 00:06:02.007 { 00:06:02.007 "nbd_device": "/dev/nbd1", 00:06:02.007 "bdev_name": "Malloc1" 00:06:02.007 } 00:06:02.007 ]' 00:06:02.007 18:36:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.007 18:36:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.007 /dev/nbd1' 00:06:02.007 18:36:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:02.007 /dev/nbd1' 00:06:02.007 18:36:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.007 18:36:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:02.007 18:36:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:02.007 18:36:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:02.007 18:36:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:02.007 18:36:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:02.007 18:36:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.007 18:36:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.007 18:36:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.007 18:36:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.007 18:36:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.007 18:36:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:02.007 256+0 records in 00:06:02.007 256+0 records out 00:06:02.007 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00507464 s, 207 MB/s 00:06:02.007 18:36:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.007 18:36:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.007 256+0 records in 00:06:02.007 256+0 records out 00:06:02.007 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203976 s, 51.4 MB/s 00:06:02.007 18:36:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.007 18:36:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.265 256+0 records in 00:06:02.265 256+0 records out 00:06:02.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250085 s, 41.9 MB/s 00:06:02.265 18:36:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:02.265 18:36:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.265 18:36:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.265 18:36:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:02.265 18:36:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.265 18:36:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.265 18:36:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.265 18:36:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.265 18:36:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:02.265 18:36:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.265 18:36:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:02.265 18:36:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.265 18:36:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:02.265 18:36:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.265 18:36:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.265 18:36:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.265 18:36:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:02.265 18:36:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.265 18:36:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.522 18:36:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.522 18:36:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.522 18:36:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.522 
18:36:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.522 18:36:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.522 18:36:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.522 18:36:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.522 18:36:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.522 18:36:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.522 18:36:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.779 18:36:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.779 18:36:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.779 18:36:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.779 18:36:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.779 18:36:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.779 18:36:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.779 18:36:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.779 18:36:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.779 18:36:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.779 18:36:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.779 18:36:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.035 18:36:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:03.035 18:36:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:03.035 18:36:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.035 18:36:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:03.035 18:36:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:03.035 18:36:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.035 18:36:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:03.035 18:36:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:03.035 18:36:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:03.035 18:36:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:03.035 18:36:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:03.035 18:36:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:03.035 18:36:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.292 18:36:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:03.549 [2024-07-20 18:36:13.668150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.549 [2024-07-20 18:36:13.757367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.549 [2024-07-20 18:36:13.757372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.549 [2024-07-20 18:36:13.816741] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:06:03.549 [2024-07-20 18:36:13.816820] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:06.828 18:36:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:06.829 18:36:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:06.829 spdk_app_start Round 2 00:06:06.829 18:36:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1260171 /var/tmp/spdk-nbd.sock 00:06:06.829 18:36:16 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1260171 ']' 00:06:06.829 18:36:16 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.829 18:36:16 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:06.829 18:36:16 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:06.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:06.829 18:36:16 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:06.829 18:36:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.829 18:36:16 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:06.829 18:36:16 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:06.829 18:36:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.829 Malloc0 00:06:06.829 18:36:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.088 Malloc1 00:06:07.088 18:36:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.088 18:36:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.088 18:36:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.088 18:36:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:07.088 18:36:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.088 18:36:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:07.088 18:36:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.088 18:36:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.088 18:36:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.088 18:36:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:07.088 18:36:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.088 18:36:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:07.088 18:36:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:07.088 18:36:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:07.088 18:36:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.088 18:36:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:07.346 /dev/nbd0 00:06:07.346 
18:36:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.346 18:36:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.346 18:36:17 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:07.346 18:36:17 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:07.346 18:36:17 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:07.346 18:36:17 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:07.346 18:36:17 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:07.346 18:36:17 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:07.346 18:36:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:07.346 18:36:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:07.346 18:36:17 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.346 1+0 records in 00:06:07.346 1+0 records out 00:06:07.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175872 s, 23.3 MB/s 00:06:07.346 18:36:17 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.346 18:36:17 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:07.346 18:36:17 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.346 18:36:17 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:07.346 18:36:17 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:07.346 18:36:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.346 18:36:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.346 18:36:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:07.604 /dev/nbd1 00:06:07.604 18:36:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.604 18:36:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.604 18:36:17 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:07.604 18:36:17 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:07.604 18:36:17 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:07.604 18:36:17 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:07.604 18:36:17 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:07.604 18:36:17 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:07.604 18:36:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:07.604 18:36:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:07.604 18:36:17 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.604 1+0 records in 00:06:07.604 1+0 records out 00:06:07.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211102 s, 19.4 MB/s 00:06:07.604 18:36:17 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.604 18:36:17 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:07.604 18:36:17 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.604 18:36:17 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:07.604 18:36:17 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:07.604 18:36:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.604 18:36:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.604 18:36:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.604 18:36:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.604 18:36:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.863 18:36:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:07.863 { 00:06:07.863 "nbd_device": "/dev/nbd0", 00:06:07.863 "bdev_name": "Malloc0" 00:06:07.863 }, 00:06:07.863 { 00:06:07.863 "nbd_device": "/dev/nbd1", 00:06:07.863 "bdev_name": "Malloc1" 00:06:07.863 } 00:06:07.863 ]' 00:06:07.863 18:36:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:07.863 { 00:06:07.863 "nbd_device": "/dev/nbd0", 00:06:07.863 "bdev_name": "Malloc0" 00:06:07.863 }, 00:06:07.863 { 00:06:07.863 "nbd_device": "/dev/nbd1", 00:06:07.863 "bdev_name": "Malloc1" 00:06:07.863 } 00:06:07.863 ]' 00:06:07.863 18:36:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.863 18:36:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:07.863 /dev/nbd1' 00:06:07.863 18:36:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:07.863 /dev/nbd1' 00:06:07.863 18:36:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.863 18:36:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:07.863 18:36:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:07.863 18:36:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:07.863 18:36:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:07.863 18:36:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:07.863 18:36:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.863 18:36:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.863 18:36:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:07.863 18:36:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.863 18:36:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:07.863 18:36:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:07.864 256+0 records in 00:06:07.864 256+0 records out 00:06:07.864 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00500187 s, 210 MB/s 00:06:07.864 18:36:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.864 18:36:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.864 256+0 records in 00:06:07.864 256+0 records out 00:06:07.864 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234474 s, 44.7 MB/s 00:06:07.864 18:36:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.864 18:36:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:07.864 256+0 records in 00:06:07.864 256+0 records out 00:06:07.864 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244187 s, 42.9 MB/s 00:06:07.864 18:36:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:07.864 18:36:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.864 18:36:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.864 18:36:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:07.864 18:36:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.864 18:36:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:07.864 18:36:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:07.864 18:36:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.864 18:36:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:07.864 18:36:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.864 18:36:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:07.864 18:36:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.864 18:36:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:07.864 18:36:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.864 18:36:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.864 18:36:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.864 18:36:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:07.864 18:36:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.864 18:36:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:08.122 18:36:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:08.122 18:36:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:08.122 18:36:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:08.122 18:36:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.122 18:36:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.122 18:36:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:08.122 18:36:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.122 18:36:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
00:06:08.122 18:36:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.122 18:36:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.380 18:36:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.380 18:36:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.380 18:36:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.380 18:36:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.380 18:36:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.380 18:36:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.380 18:36:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.380 18:36:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.380 18:36:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.380 18:36:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.380 18:36:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.638 18:36:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:08.638 18:36:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:08.638 18:36:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.638 18:36:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:08.638 18:36:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:08.638 18:36:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.638 18:36:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:08.638 18:36:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:08.638 18:36:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:08.638 18:36:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:08.638 18:36:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:08.638 18:36:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:08.638 18:36:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:08.896 18:36:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:09.155 [2024-07-20 18:36:19.398043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.414 [2024-07-20 18:36:19.486186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.414 [2024-07-20 18:36:19.486186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.414 [2024-07-20 18:36:19.548361] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:09.414 [2024-07-20 18:36:19.548441] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
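Rounds 0 through 2 above exercise the same data path each time; condensed into one sketch (rpc.py path shortened, temp-file paths simplified, malloc sizes and dd/cmp arguments taken from the trace -- this mirrors what event.sh and nbd_common.sh do rather than quoting them):

#!/usr/bin/env bash
# One app_repeat round, as exercised by the trace above (condensed sketch).
sock=/var/tmp/spdk-nbd.sock
rpc() { ./scripts/rpc.py -s "$sock" "$@"; }        # assumed relative path to the SPDK tree

rpc bdev_malloc_create 64 4096                     # -> Malloc0
rpc bdev_malloc_create 64 4096                     # -> Malloc1
rpc nbd_start_disk Malloc0 /dev/nbd0
rpc nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
for dev in /dev/nbd0 /dev/nbd1; do
    dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest "$dev"           # verify what was written
done
rm /tmp/nbdrandtest

rpc nbd_stop_disk /dev/nbd0
rpc nbd_stop_disk /dev/nbd1
count=$(rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ] || echo "nbd devices still present" >&2
rpc spdk_kill_instance SIGTERM                     # tear the app down for the next round
sleep 3

The oflag=direct on the write and the cmp read-back are what make each round a real round trip through the nbd device rather than the page cache.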
00:06:11.938 18:36:22 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1260171 /var/tmp/spdk-nbd.sock 00:06:11.938 18:36:22 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1260171 ']' 00:06:11.938 18:36:22 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.938 18:36:22 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:11.938 18:36:22 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.938 18:36:22 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:11.938 18:36:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.194 18:36:22 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:12.194 18:36:22 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:12.194 18:36:22 event.app_repeat -- event/event.sh@39 -- # killprocess 1260171 00:06:12.194 18:36:22 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 1260171 ']' 00:06:12.194 18:36:22 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 1260171 00:06:12.194 18:36:22 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:12.194 18:36:22 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:12.194 18:36:22 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1260171 00:06:12.194 18:36:22 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:12.194 18:36:22 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:12.194 18:36:22 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1260171' 00:06:12.194 killing process with pid 1260171 00:06:12.194 18:36:22 event.app_repeat -- common/autotest_common.sh@965 -- # kill 1260171 00:06:12.194 18:36:22 event.app_repeat -- common/autotest_common.sh@970 -- # wait 1260171 00:06:12.452 spdk_app_start is called in Round 0. 00:06:12.452 Shutdown signal received, stop current app iteration 00:06:12.452 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:06:12.452 spdk_app_start is called in Round 1. 00:06:12.452 Shutdown signal received, stop current app iteration 00:06:12.452 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:06:12.452 spdk_app_start is called in Round 2. 00:06:12.452 Shutdown signal received, stop current app iteration 00:06:12.452 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:06:12.452 spdk_app_start is called in Round 3. 
00:06:12.452 Shutdown signal received, stop current app iteration 00:06:12.452 18:36:22 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:12.452 18:36:22 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:12.452 00:06:12.452 real 0m17.719s 00:06:12.452 user 0m39.084s 00:06:12.452 sys 0m3.258s 00:06:12.452 18:36:22 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.452 18:36:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.452 ************************************ 00:06:12.452 END TEST app_repeat 00:06:12.452 ************************************ 00:06:12.452 18:36:22 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:12.452 18:36:22 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:12.452 18:36:22 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:12.452 18:36:22 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.452 18:36:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.452 ************************************ 00:06:12.452 START TEST cpu_locks 00:06:12.452 ************************************ 00:06:12.452 18:36:22 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:12.452 * Looking for test storage... 00:06:12.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:12.452 18:36:22 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:12.452 18:36:22 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:12.452 18:36:22 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:12.452 18:36:22 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:12.452 18:36:22 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:12.452 18:36:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.452 18:36:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.452 ************************************ 00:06:12.452 START TEST default_locks 00:06:12.452 ************************************ 00:06:12.452 18:36:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:12.452 18:36:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1263023 00:06:12.452 18:36:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.452 18:36:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1263023 00:06:12.452 18:36:22 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 1263023 ']' 00:06:12.452 18:36:22 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.452 18:36:22 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:12.452 18:36:22 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:12.452 18:36:22 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:12.452 18:36:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.710 [2024-07-20 18:36:22.807921] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:12.710 [2024-07-20 18:36:22.808007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263023 ] 00:06:12.710 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.710 [2024-07-20 18:36:22.868866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.710 [2024-07-20 18:36:22.952259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.991 18:36:23 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:12.991 18:36:23 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:12.991 18:36:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1263023 00:06:12.991 18:36:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1263023 00:06:12.991 18:36:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.554 lslocks: write error 00:06:13.554 18:36:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1263023 00:06:13.554 18:36:23 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 1263023 ']' 00:06:13.554 18:36:23 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 1263023 00:06:13.554 18:36:23 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:13.554 18:36:23 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:13.554 18:36:23 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1263023 00:06:13.554 18:36:23 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:13.554 18:36:23 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:13.554 18:36:23 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1263023' 00:06:13.554 killing process with pid 1263023 00:06:13.554 18:36:23 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 1263023 00:06:13.554 18:36:23 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 1263023 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1263023 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1263023 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@651 
-- # waitforlisten 1263023 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 1263023 ']' 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1263023) - No such process 00:06:13.812 ERROR: process (pid: 1263023) is no longer running 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:13.812 00:06:13.812 real 0m1.302s 00:06:13.812 user 0m1.206s 00:06:13.812 sys 0m0.568s 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.812 18:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.812 ************************************ 00:06:13.812 END TEST default_locks 00:06:13.812 ************************************ 00:06:13.812 18:36:24 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:13.812 18:36:24 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:13.812 18:36:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.812 18:36:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.812 ************************************ 00:06:13.812 START TEST default_locks_via_rpc 00:06:13.812 ************************************ 00:06:13.812 18:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:13.812 18:36:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1263192 00:06:13.812 18:36:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.812 18:36:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1263192 00:06:13.812 18:36:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1263192 ']' 00:06:13.812 18:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.812 18:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:13.812 18:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.812 18:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:13.812 18:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.070 [2024-07-20 18:36:24.162176] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:14.070 [2024-07-20 18:36:24.162259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263192 ] 00:06:14.070 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.070 [2024-07-20 18:36:24.219648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.070 [2024-07-20 18:36:24.308105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.328 18:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:14.328 18:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:14.328 18:36:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:14.328 18:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.328 18:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.328 18:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.328 18:36:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:14.328 18:36:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:14.328 18:36:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:14.328 18:36:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:14.328 18:36:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:14.328 18:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.328 18:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.328 18:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.328 18:36:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1263192 00:06:14.328 18:36:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1263192 00:06:14.328 18:36:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.586 18:36:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1263192 00:06:14.586 18:36:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 1263192 ']' 00:06:14.586 18:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 1263192 00:06:14.586 18:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:14.586 18:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:14.586 18:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1263192 00:06:14.843 18:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:14.843 18:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:14.843 18:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1263192' 00:06:14.843 killing process with pid 1263192 00:06:14.843 18:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 1263192 00:06:14.843 18:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 1263192 00:06:15.110 00:06:15.110 real 0m1.196s 00:06:15.110 user 0m1.125s 00:06:15.110 sys 0m0.531s 00:06:15.110 18:36:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:15.110 18:36:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.110 ************************************ 00:06:15.110 END TEST default_locks_via_rpc 00:06:15.110 ************************************ 00:06:15.110 18:36:25 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:15.110 18:36:25 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:15.110 18:36:25 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.110 18:36:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.110 ************************************ 00:06:15.110 START TEST non_locking_app_on_locked_coremask 00:06:15.110 ************************************ 00:06:15.110 18:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:15.111 18:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1263354 00:06:15.111 18:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.111 18:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1263354 /var/tmp/spdk.sock 00:06:15.111 18:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1263354 ']' 00:06:15.111 18:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.111 18:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:15.111 18:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
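The default_locks and default_locks_via_rpc traces above lean on two helpers: a check that the target still holds its spdk_cpu_lock file, and a guarded kill-and-wait. A rough reconstruction from the trace (bodies are inferred; the verbatim autotest_common.sh implementations differ in detail):

# locks_exist: the core lock shows up as a locked spdk_cpu_lock file held by the target pid.
locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

# killprocess: refuse bogus pids, never signal a sudo wrapper, then kill and reap the target.
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                     # still alive?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" != sudo ] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                            # reap; the target is a child of this shell
}

The "lslocks: write error" seen in the trace is most likely benign: grep -q exits on the first match and closes the pipe, so lslocks reports EPIPE while the lock check still succeeds.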
00:06:15.111 18:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:15.111 18:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.111 [2024-07-20 18:36:25.406494] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:15.111 [2024-07-20 18:36:25.406575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263354 ] 00:06:15.368 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.368 [2024-07-20 18:36:25.465395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.368 [2024-07-20 18:36:25.553964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.626 18:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:15.626 18:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:15.626 18:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1263357 00:06:15.626 18:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:15.626 18:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1263357 /var/tmp/spdk2.sock 00:06:15.626 18:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1263357 ']' 00:06:15.626 18:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.626 18:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:15.626 18:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.626 18:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:15.626 18:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.626 [2024-07-20 18:36:25.853001] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:15.626 [2024-07-20 18:36:25.853072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263357 ] 00:06:15.626 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.626 [2024-07-20 18:36:25.945940] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:15.626 [2024-07-20 18:36:25.945974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.884 [2024-07-20 18:36:26.135183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.472 18:36:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:16.472 18:36:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:16.472 18:36:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1263354 00:06:16.472 18:36:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.472 18:36:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1263354 00:06:17.049 lslocks: write error 00:06:17.049 18:36:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1263354 00:06:17.049 18:36:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1263354 ']' 00:06:17.049 18:36:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1263354 00:06:17.049 18:36:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:17.049 18:36:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:17.049 18:36:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1263354 00:06:17.049 18:36:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:17.049 18:36:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:17.049 18:36:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1263354' 00:06:17.049 killing process with pid 1263354 00:06:17.049 18:36:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1263354 00:06:17.049 18:36:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1263354 00:06:17.981 18:36:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1263357 00:06:17.981 18:36:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1263357 ']' 00:06:17.981 18:36:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1263357 00:06:17.981 18:36:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:17.981 18:36:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:17.981 18:36:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1263357 00:06:17.981 18:36:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:17.981 18:36:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:17.981 18:36:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1263357' 00:06:17.981 
killing process with pid 1263357 00:06:17.981 18:36:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1263357 00:06:17.981 18:36:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1263357 00:06:18.239 00:06:18.239 real 0m3.078s 00:06:18.239 user 0m3.221s 00:06:18.239 sys 0m1.030s 00:06:18.239 18:36:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.239 18:36:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.239 ************************************ 00:06:18.239 END TEST non_locking_app_on_locked_coremask 00:06:18.239 ************************************ 00:06:18.239 18:36:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:18.239 18:36:28 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:18.239 18:36:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.239 18:36:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.239 ************************************ 00:06:18.239 START TEST locking_app_on_unlocked_coremask 00:06:18.239 ************************************ 00:06:18.239 18:36:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:18.239 18:36:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1263789 00:06:18.239 18:36:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:18.239 18:36:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1263789 /var/tmp/spdk.sock 00:06:18.239 18:36:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1263789 ']' 00:06:18.239 18:36:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.239 18:36:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:18.239 18:36:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.239 18:36:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:18.239 18:36:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.239 [2024-07-20 18:36:28.534083] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:18.239 [2024-07-20 18:36:28.534195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263789 ] 00:06:18.239 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.497 [2024-07-20 18:36:28.597566] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:18.497 [2024-07-20 18:36:28.597610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.497 [2024-07-20 18:36:28.686363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.756 18:36:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:18.756 18:36:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:18.756 18:36:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1263792 00:06:18.756 18:36:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:18.756 18:36:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1263792 /var/tmp/spdk2.sock 00:06:18.756 18:36:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1263792 ']' 00:06:18.756 18:36:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.756 18:36:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:18.756 18:36:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.756 18:36:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:18.756 18:36:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.756 [2024-07-20 18:36:28.995984] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:18.756 [2024-07-20 18:36:28.996054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263792 ] 00:06:18.756 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.014 [2024-07-20 18:36:29.085290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.014 [2024-07-20 18:36:29.263834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.945 18:36:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:19.945 18:36:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:19.945 18:36:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1263792 00:06:19.945 18:36:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1263792 00:06:19.945 18:36:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.511 lslocks: write error 00:06:20.511 18:36:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1263789 00:06:20.511 18:36:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1263789 ']' 00:06:20.511 18:36:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 1263789 00:06:20.511 18:36:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:20.511 18:36:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:20.511 18:36:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1263789 00:06:20.511 18:36:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:20.511 18:36:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:20.511 18:36:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1263789' 00:06:20.511 killing process with pid 1263789 00:06:20.511 18:36:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 1263789 00:06:20.511 18:36:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 1263789 00:06:21.076 18:36:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1263792 00:06:21.076 18:36:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1263792 ']' 00:06:21.076 18:36:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 1263792 00:06:21.076 18:36:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:21.076 18:36:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:21.076 18:36:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1263792 00:06:21.333 18:36:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:06:21.333 18:36:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:21.333 18:36:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1263792' 00:06:21.333 killing process with pid 1263792 00:06:21.333 18:36:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 1263792 00:06:21.333 18:36:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 1263792 00:06:21.592 00:06:21.592 real 0m3.326s 00:06:21.592 user 0m3.455s 00:06:21.592 sys 0m1.041s 00:06:21.592 18:36:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.592 18:36:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.592 ************************************ 00:06:21.592 END TEST locking_app_on_unlocked_coremask 00:06:21.592 ************************************ 00:06:21.592 18:36:31 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:21.592 18:36:31 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:21.592 18:36:31 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.592 18:36:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.592 ************************************ 00:06:21.592 START TEST locking_app_on_locked_coremask 00:06:21.592 ************************************ 00:06:21.592 18:36:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:21.592 18:36:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1264222 00:06:21.592 18:36:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.592 18:36:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1264222 /var/tmp/spdk.sock 00:06:21.592 18:36:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1264222 ']' 00:06:21.592 18:36:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.592 18:36:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:21.593 18:36:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.593 18:36:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:21.593 18:36:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.593 [2024-07-20 18:36:31.905525] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:21.593 [2024-07-20 18:36:31.905627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264222 ] 00:06:21.851 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.851 [2024-07-20 18:36:31.968646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.851 [2024-07-20 18:36:32.057506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.110 18:36:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:22.110 18:36:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:22.110 18:36:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1264232 00:06:22.110 18:36:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:22.110 18:36:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1264232 /var/tmp/spdk2.sock 00:06:22.110 18:36:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:22.110 18:36:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1264232 /var/tmp/spdk2.sock 00:06:22.110 18:36:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:22.110 18:36:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.110 18:36:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:22.110 18:36:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.110 18:36:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1264232 /var/tmp/spdk2.sock 00:06:22.110 18:36:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1264232 ']' 00:06:22.110 18:36:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.110 18:36:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:22.111 18:36:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.111 18:36:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:22.111 18:36:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.111 [2024-07-20 18:36:32.369229] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:22.111 [2024-07-20 18:36:32.369330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264232 ] 00:06:22.111 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.368 [2024-07-20 18:36:32.468262] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1264222 has claimed it. 00:06:22.368 [2024-07-20 18:36:32.468336] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:22.934 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1264232) - No such process 00:06:22.934 ERROR: process (pid: 1264232) is no longer running 00:06:22.934 18:36:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:22.934 18:36:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:22.934 18:36:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:22.934 18:36:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.934 18:36:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:22.934 18:36:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.934 18:36:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1264222 00:06:22.934 18:36:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1264222 00:06:22.934 18:36:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.498 lslocks: write error 00:06:23.498 18:36:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1264222 00:06:23.498 18:36:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1264222 ']' 00:06:23.498 18:36:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1264222 00:06:23.498 18:36:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:23.498 18:36:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:23.498 18:36:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1264222 00:06:23.498 18:36:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:23.498 18:36:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:23.498 18:36:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1264222' 00:06:23.498 killing process with pid 1264222 00:06:23.498 18:36:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1264222 00:06:23.498 18:36:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1264222 00:06:23.756 00:06:23.756 real 0m2.127s 00:06:23.756 user 0m2.255s 00:06:23.756 sys 0m0.681s 00:06:23.756 18:36:33 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.756 18:36:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.756 ************************************ 00:06:23.756 END TEST locking_app_on_locked_coremask 00:06:23.756 ************************************ 00:06:23.756 18:36:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:23.756 18:36:34 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:23.756 18:36:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.756 18:36:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.756 ************************************ 00:06:23.756 START TEST locking_overlapped_coremask 00:06:23.756 ************************************ 00:06:23.756 18:36:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:23.756 18:36:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1264517 00:06:23.756 18:36:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:23.756 18:36:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1264517 /var/tmp/spdk.sock 00:06:23.756 18:36:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 1264517 ']' 00:06:23.756 18:36:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.756 18:36:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:23.756 18:36:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.756 18:36:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:23.756 18:36:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.756 [2024-07-20 18:36:34.078241] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:23.756 [2024-07-20 18:36:34.078332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264517 ] 00:06:24.014 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.014 [2024-07-20 18:36:34.140389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.014 [2024-07-20 18:36:34.232472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.014 [2024-07-20 18:36:34.232535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.014 [2024-07-20 18:36:34.232538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.271 18:36:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:24.271 18:36:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:24.271 18:36:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1264532 00:06:24.272 18:36:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1264532 /var/tmp/spdk2.sock 00:06:24.272 18:36:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:24.272 18:36:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1264532 /var/tmp/spdk2.sock 00:06:24.272 18:36:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:24.272 18:36:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:24.272 18:36:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.272 18:36:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:24.272 18:36:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.272 18:36:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1264532 /var/tmp/spdk2.sock 00:06:24.272 18:36:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 1264532 ']' 00:06:24.272 18:36:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.272 18:36:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:24.272 18:36:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.272 18:36:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:24.272 18:36:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.272 [2024-07-20 18:36:34.534951] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:24.272 [2024-07-20 18:36:34.535045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264532 ] 00:06:24.272 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.529 [2024-07-20 18:36:34.623941] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1264517 has claimed it. 00:06:24.529 [2024-07-20 18:36:34.623991] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:25.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1264532) - No such process 00:06:25.094 ERROR: process (pid: 1264532) is no longer running 00:06:25.094 18:36:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:25.094 18:36:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:25.094 18:36:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:25.094 18:36:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:25.094 18:36:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:25.094 18:36:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:25.094 18:36:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:25.094 18:36:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:25.094 18:36:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:25.094 18:36:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:25.094 18:36:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1264517 00:06:25.094 18:36:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 1264517 ']' 00:06:25.094 18:36:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 1264517 00:06:25.094 18:36:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:25.094 18:36:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:25.094 18:36:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1264517 00:06:25.094 18:36:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:25.094 18:36:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:25.094 18:36:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1264517' 00:06:25.094 killing process with pid 1264517 00:06:25.094 18:36:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
1264517 00:06:25.094 18:36:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 1264517 00:06:25.372 00:06:25.372 real 0m1.628s 00:06:25.372 user 0m4.388s 00:06:25.372 sys 0m0.438s 00:06:25.372 18:36:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.372 18:36:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.372 ************************************ 00:06:25.372 END TEST locking_overlapped_coremask 00:06:25.373 ************************************ 00:06:25.373 18:36:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:25.373 18:36:35 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:25.373 18:36:35 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.373 18:36:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.630 ************************************ 00:06:25.630 START TEST locking_overlapped_coremask_via_rpc 00:06:25.630 ************************************ 00:06:25.630 18:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:25.630 18:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1264694 00:06:25.630 18:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1264694 /var/tmp/spdk.sock 00:06:25.630 18:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:25.630 18:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1264694 ']' 00:06:25.630 18:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.630 18:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:25.630 18:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.630 18:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:25.630 18:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.630 [2024-07-20 18:36:35.757561] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:25.630 [2024-07-20 18:36:35.757671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264694 ] 00:06:25.630 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.630 [2024-07-20 18:36:35.821545] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:25.630 [2024-07-20 18:36:35.821592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.630 [2024-07-20 18:36:35.911011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.630 [2024-07-20 18:36:35.911081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.630 [2024-07-20 18:36:35.911083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.887 18:36:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:25.887 18:36:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:25.887 18:36:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1264739 00:06:25.887 18:36:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1264739 /var/tmp/spdk2.sock 00:06:25.887 18:36:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:25.888 18:36:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1264739 ']' 00:06:25.888 18:36:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.888 18:36:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:25.888 18:36:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.888 18:36:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:25.888 18:36:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.144 [2024-07-20 18:36:36.220319] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:26.144 [2024-07-20 18:36:36.220414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264739 ] 00:06:26.144 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.144 [2024-07-20 18:36:36.311370] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:26.144 [2024-07-20 18:36:36.311411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.400 [2024-07-20 18:36:36.487659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.400 [2024-07-20 18:36:36.490850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:26.400 [2024-07-20 18:36:36.490853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.964 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:26.964 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:26.964 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.965 [2024-07-20 18:36:37.172892] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1264694 has claimed it. 
00:06:26.965 request: 00:06:26.965 { 00:06:26.965 "method": "framework_enable_cpumask_locks", 00:06:26.965 "req_id": 1 00:06:26.965 } 00:06:26.965 Got JSON-RPC error response 00:06:26.965 response: 00:06:26.965 { 00:06:26.965 "code": -32603, 00:06:26.965 "message": "Failed to claim CPU core: 2" 00:06:26.965 } 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1264694 /var/tmp/spdk.sock 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1264694 ']' 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:26.965 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.222 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:27.222 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:27.222 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1264739 /var/tmp/spdk2.sock 00:06:27.222 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1264739 ']' 00:06:27.222 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.222 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:27.222 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:27.222 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:27.222 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.479 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:27.479 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:27.479 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:27.479 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.479 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.479 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.479 00:06:27.479 real 0m1.982s 00:06:27.479 user 0m1.031s 00:06:27.479 sys 0m0.165s 00:06:27.479 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.479 18:36:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.479 ************************************ 00:06:27.479 END TEST locking_overlapped_coremask_via_rpc 00:06:27.479 ************************************ 00:06:27.479 18:36:37 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:27.479 18:36:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1264694 ]] 00:06:27.479 18:36:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1264694 00:06:27.479 18:36:37 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1264694 ']' 00:06:27.479 18:36:37 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1264694 00:06:27.479 18:36:37 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:27.479 18:36:37 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:27.479 18:36:37 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1264694 00:06:27.479 18:36:37 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:27.479 18:36:37 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:27.479 18:36:37 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1264694' 00:06:27.479 killing process with pid 1264694 00:06:27.479 18:36:37 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 1264694 00:06:27.479 18:36:37 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 1264694 00:06:28.060 18:36:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1264739 ]] 00:06:28.060 18:36:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1264739 00:06:28.060 18:36:38 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1264739 ']' 00:06:28.060 18:36:38 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1264739 00:06:28.060 18:36:38 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:28.060 18:36:38 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:06:28.060 18:36:38 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1264739 00:06:28.060 18:36:38 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:28.060 18:36:38 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:28.060 18:36:38 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1264739' 00:06:28.060 killing process with pid 1264739 00:06:28.060 18:36:38 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 1264739 00:06:28.060 18:36:38 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 1264739 00:06:28.318 18:36:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:28.318 18:36:38 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:28.318 18:36:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1264694 ]] 00:06:28.318 18:36:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1264694 00:06:28.318 18:36:38 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1264694 ']' 00:06:28.318 18:36:38 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1264694 00:06:28.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1264694) - No such process 00:06:28.318 18:36:38 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 1264694 is not found' 00:06:28.318 Process with pid 1264694 is not found 00:06:28.318 18:36:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1264739 ]] 00:06:28.318 18:36:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1264739 00:06:28.318 18:36:38 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1264739 ']' 00:06:28.318 18:36:38 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1264739 00:06:28.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1264739) - No such process 00:06:28.318 18:36:38 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 1264739 is not found' 00:06:28.318 Process with pid 1264739 is not found 00:06:28.318 18:36:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:28.318 00:06:28.318 real 0m15.881s 00:06:28.318 user 0m27.486s 00:06:28.318 sys 0m5.354s 00:06:28.318 18:36:38 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.318 18:36:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.318 ************************************ 00:06:28.318 END TEST cpu_locks 00:06:28.318 ************************************ 00:06:28.318 00:06:28.318 real 0m41.914s 00:06:28.318 user 1m20.127s 00:06:28.318 sys 0m9.431s 00:06:28.318 18:36:38 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.318 18:36:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.318 ************************************ 00:06:28.318 END TEST event 00:06:28.318 ************************************ 00:06:28.319 18:36:38 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:28.319 18:36:38 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:28.319 18:36:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.319 18:36:38 -- common/autotest_common.sh@10 -- # set +x 00:06:28.319 ************************************ 00:06:28.319 START TEST thread 00:06:28.319 ************************************ 00:06:28.319 18:36:38 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:28.581 * Looking for test storage... 00:06:28.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:28.581 18:36:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:28.581 18:36:38 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:28.581 18:36:38 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.581 18:36:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.581 ************************************ 00:06:28.581 START TEST thread_poller_perf 00:06:28.581 ************************************ 00:06:28.581 18:36:38 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:28.581 [2024-07-20 18:36:38.728443] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:28.581 [2024-07-20 18:36:38.728507] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265193 ] 00:06:28.581 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.581 [2024-07-20 18:36:38.791615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.581 [2024-07-20 18:36:38.880825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.581 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:29.953 ====================================== 00:06:29.953 busy:2712590203 (cyc) 00:06:29.953 total_run_count: 296000 00:06:29.953 tsc_hz: 2700000000 (cyc) 00:06:29.953 ====================================== 00:06:29.953 poller_cost: 9164 (cyc), 3394 (nsec) 00:06:29.953 00:06:29.953 real 0m1.256s 00:06:29.953 user 0m1.172s 00:06:29.953 sys 0m0.078s 00:06:29.953 18:36:39 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.953 18:36:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.953 ************************************ 00:06:29.953 END TEST thread_poller_perf 00:06:29.953 ************************************ 00:06:29.953 18:36:39 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:29.953 18:36:39 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:29.953 18:36:39 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.953 18:36:39 thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.953 ************************************ 00:06:29.953 START TEST thread_poller_perf 00:06:29.953 ************************************ 00:06:29.953 18:36:40 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:29.953 [2024-07-20 18:36:40.036404] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:29.953 [2024-07-20 18:36:40.036473] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265347 ] 00:06:29.953 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.953 [2024-07-20 18:36:40.100297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.953 [2024-07-20 18:36:40.194131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.953 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:31.327 ====================================== 00:06:31.327 busy:2702450034 (cyc) 00:06:31.327 total_run_count: 3856000 00:06:31.327 tsc_hz: 2700000000 (cyc) 00:06:31.327 ====================================== 00:06:31.327 poller_cost: 700 (cyc), 259 (nsec) 00:06:31.327 00:06:31.327 real 0m1.256s 00:06:31.327 user 0m1.162s 00:06:31.327 sys 0m0.088s 00:06:31.327 18:36:41 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.327 18:36:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:31.327 ************************************ 00:06:31.327 END TEST thread_poller_perf 00:06:31.327 ************************************ 00:06:31.327 18:36:41 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:31.327 00:06:31.327 real 0m2.662s 00:06:31.327 user 0m2.393s 00:06:31.327 sys 0m0.269s 00:06:31.327 18:36:41 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.327 18:36:41 thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.327 ************************************ 00:06:31.327 END TEST thread 00:06:31.327 ************************************ 00:06:31.327 18:36:41 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:31.327 18:36:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:31.327 18:36:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.327 18:36:41 -- common/autotest_common.sh@10 -- # set +x 00:06:31.327 ************************************ 00:06:31.327 START TEST accel 00:06:31.327 ************************************ 00:06:31.327 18:36:41 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:31.327 * Looking for test storage... 
00:06:31.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:31.327 18:36:41 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:31.327 18:36:41 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:31.328 18:36:41 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:31.328 18:36:41 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1265546 00:06:31.328 18:36:41 accel -- accel/accel.sh@63 -- # waitforlisten 1265546 00:06:31.328 18:36:41 accel -- common/autotest_common.sh@827 -- # '[' -z 1265546 ']' 00:06:31.328 18:36:41 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:31.328 18:36:41 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.328 18:36:41 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:31.328 18:36:41 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:31.328 18:36:41 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.328 18:36:41 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.328 18:36:41 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.328 18:36:41 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:31.328 18:36:41 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.328 18:36:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.328 18:36:41 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.328 18:36:41 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.328 18:36:41 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:31.328 18:36:41 accel -- accel/accel.sh@41 -- # jq -r . 00:06:31.328 [2024-07-20 18:36:41.456572] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:31.328 [2024-07-20 18:36:41.456657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265546 ] 00:06:31.328 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.328 [2024-07-20 18:36:41.515537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.328 [2024-07-20 18:36:41.604903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.587 18:36:41 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:31.587 18:36:41 accel -- common/autotest_common.sh@860 -- # return 0 00:06:31.587 18:36:41 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:31.587 18:36:41 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:31.587 18:36:41 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:31.587 18:36:41 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:31.587 18:36:41 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:31.587 18:36:41 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:31.587 18:36:41 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.587 18:36:41 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:31.587 18:36:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.587 18:36:41 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.587 18:36:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.587 18:36:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.587 18:36:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.587 18:36:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.587 18:36:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.587 18:36:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.587 18:36:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.587 18:36:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.587 18:36:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.587 18:36:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.587 18:36:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.587 18:36:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.587 18:36:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.587 18:36:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.587 18:36:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.587 18:36:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.587 18:36:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.587 18:36:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.587 18:36:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.587 18:36:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.587 18:36:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.587 18:36:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.587 
18:36:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.587 18:36:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.587 18:36:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.587 18:36:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.587 18:36:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.587 18:36:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.587 18:36:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.587 18:36:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.587 18:36:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.587 18:36:41 accel -- accel/accel.sh@75 -- # killprocess 1265546 00:06:31.587 18:36:41 accel -- common/autotest_common.sh@946 -- # '[' -z 1265546 ']' 00:06:31.587 18:36:41 accel -- common/autotest_common.sh@950 -- # kill -0 1265546 00:06:31.587 18:36:41 accel -- common/autotest_common.sh@951 -- # uname 00:06:31.587 18:36:41 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:31.587 18:36:41 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1265546 00:06:31.846 18:36:41 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:31.846 18:36:41 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:31.846 18:36:41 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1265546' 00:06:31.846 killing process with pid 1265546 00:06:31.846 18:36:41 accel -- common/autotest_common.sh@965 -- # kill 1265546 00:06:31.846 18:36:41 accel -- common/autotest_common.sh@970 -- # wait 1265546 00:06:32.105 18:36:42 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:32.105 18:36:42 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:32.105 18:36:42 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:32.105 18:36:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.105 18:36:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.105 18:36:42 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:32.105 18:36:42 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:32.105 18:36:42 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:32.105 18:36:42 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.105 18:36:42 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.105 18:36:42 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.105 18:36:42 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.105 18:36:42 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.105 18:36:42 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:32.105 18:36:42 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
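The build_accel_config trace above (accel/accel.sh@31-41) follows a simple pattern: start with an empty accel_json_cfg array, append module-specific JSON fragments only when the corresponding option checks are non-zero, then join the fragments with a comma IFS and validate the result with jq -r . before it is handed to accel_perf through -c /dev/fd/62. A minimal sketch of that pattern, with the helper name and body invented purely for illustration (the real helper lives in test/accel/accel.sh):

# Illustrative sketch only, not the actual accel.sh source.
build_accel_config_sketch() {
  local accel_json_cfg=()        # accel.sh@31: start with no extra module config
  # accel.sh@32-36: in this run every "[[ 0 -gt 0 ]]" / "[[ -n '' ]]" check is
  # false, so no module-specific fragments are appended to the array.
  local IFS=,                    # accel.sh@40: fragments are comma-joined
  if ((${#accel_json_cfg[@]} > 0)); then
    # accel.sh@41: jq -r . validates the assembled JSON before accel_perf reads
    # it as a config file on /dev/fd/62
    printf '%s' "${accel_json_cfg[*]}" | jq -r .
  fi
}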
00:06:32.105 18:36:42 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.105 18:36:42 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:32.105 18:36:42 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:32.105 18:36:42 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:32.105 18:36:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.105 18:36:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.105 ************************************ 00:06:32.105 START TEST accel_missing_filename 00:06:32.105 ************************************ 00:06:32.105 18:36:42 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:32.105 18:36:42 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:32.105 18:36:42 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:32.105 18:36:42 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:32.105 18:36:42 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.105 18:36:42 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:32.105 18:36:42 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.105 18:36:42 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:32.364 18:36:42 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:32.364 18:36:42 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:32.364 18:36:42 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.364 18:36:42 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.364 18:36:42 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.364 18:36:42 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.364 18:36:42 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.364 18:36:42 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:32.364 18:36:42 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:32.364 [2024-07-20 18:36:42.443898] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:32.364 [2024-07-20 18:36:42.443962] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265714 ] 00:06:32.364 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.364 [2024-07-20 18:36:42.506860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.364 [2024-07-20 18:36:42.599158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.364 [2024-07-20 18:36:42.660930] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:32.622 [2024-07-20 18:36:42.746208] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:32.622 A filename is required. 
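The accel_missing_filename case above is a negative test: accel_perf is launched with -w compress but no -l input file, the app aborts with "A filename is required.", and the NOT wrapper from common/autotest_common.sh counts that failure as a pass. The exit-status handling traced just below (es=234 dropping to 106 and then 1) is the wrapper normalizing a signal-range status before the final check. A minimal sketch of that pattern, with the helper body simplified for illustration:

# Hedged sketch of the NOT-style negative test; the real helper is in
# common/autotest_common.sh and does more argument validation than this.
NOT_sketch() {
  local es=0
  "$@" || es=$?
  ((es > 128)) && es=$((es - 128))   # matches the es=234 -> es=106 step in the log
  ((es != 0))                        # the wrapper succeeds only if the command failed
}

NOT_sketch accel_perf -t 1 -w compress   # compress with no -l input file must fail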
00:06:32.622 18:36:42 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:32.622 18:36:42 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:32.622 18:36:42 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:32.622 18:36:42 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:32.622 18:36:42 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:32.622 18:36:42 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:32.622 00:06:32.622 real 0m0.404s 00:06:32.622 user 0m0.287s 00:06:32.622 sys 0m0.151s 00:06:32.622 18:36:42 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.622 18:36:42 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:32.622 ************************************ 00:06:32.622 END TEST accel_missing_filename 00:06:32.622 ************************************ 00:06:32.622 18:36:42 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:32.622 18:36:42 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:32.622 18:36:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.622 18:36:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.622 ************************************ 00:06:32.622 START TEST accel_compress_verify 00:06:32.622 ************************************ 00:06:32.622 18:36:42 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:32.622 18:36:42 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:32.622 18:36:42 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:32.622 18:36:42 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:32.622 18:36:42 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.622 18:36:42 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:32.622 18:36:42 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.622 18:36:42 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:32.622 18:36:42 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:32.622 18:36:42 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:32.622 18:36:42 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.622 18:36:42 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.622 18:36:42 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.622 18:36:42 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.622 18:36:42 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.622 
18:36:42 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:32.622 18:36:42 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:32.622 [2024-07-20 18:36:42.894827] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:32.622 [2024-07-20 18:36:42.894889] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265735 ] 00:06:32.622 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.879 [2024-07-20 18:36:42.958862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.879 [2024-07-20 18:36:43.049975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.879 [2024-07-20 18:36:43.108303] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:32.879 [2024-07-20 18:36:43.187140] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:33.137 00:06:33.137 Compression does not support the verify option, aborting. 00:06:33.137 18:36:43 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:33.137 18:36:43 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:33.137 18:36:43 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:33.137 18:36:43 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:33.137 18:36:43 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:33.137 18:36:43 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:33.137 00:06:33.137 real 0m0.391s 00:06:33.137 user 0m0.279s 00:06:33.137 sys 0m0.142s 00:06:33.137 18:36:43 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.137 18:36:43 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:33.137 ************************************ 00:06:33.137 END TEST accel_compress_verify 00:06:33.137 ************************************ 00:06:33.137 18:36:43 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:33.137 18:36:43 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:33.137 18:36:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.137 18:36:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.137 ************************************ 00:06:33.137 START TEST accel_wrong_workload 00:06:33.137 ************************************ 00:06:33.137 18:36:43 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:33.137 18:36:43 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:33.137 18:36:43 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:33.137 18:36:43 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:33.137 18:36:43 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.137 18:36:43 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:33.137 18:36:43 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.137 18:36:43 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:06:33.137 18:36:43 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:33.137 18:36:43 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:33.137 18:36:43 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.137 18:36:43 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.137 18:36:43 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.137 18:36:43 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.137 18:36:43 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.137 18:36:43 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:33.137 18:36:43 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:33.137 Unsupported workload type: foobar 00:06:33.137 [2024-07-20 18:36:43.334816] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:33.137 accel_perf options: 00:06:33.137 [-h help message] 00:06:33.137 [-q queue depth per core] 00:06:33.137 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:33.137 [-T number of threads per core 00:06:33.137 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:33.137 [-t time in seconds] 00:06:33.137 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:33.137 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:33.137 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:33.137 [-l for compress/decompress workloads, name of uncompressed input file 00:06:33.137 [-S for crc32c workload, use this seed value (default 0) 00:06:33.137 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:33.137 [-f for fill workload, use this BYTE value (default 255) 00:06:33.137 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:33.137 [-y verify result if this switch is on] 00:06:33.137 [-a tasks to allocate per core (default: same value as -q)] 00:06:33.137 Can be used to spread operations across a wider range of memory. 
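The usage text printed above documents the accel_perf knobs the rest of this run exercises. For reference, the positive tests later in the log combine them along these lines (paths shortened; these are illustrative invocations assembled from the flags shown, not commands copied verbatim from the suite):

./build/examples/accel_perf -t 1 -w crc32c -S 32 -y             # crc32c, seed 32, verify result
./build/examples/accel_perf -t 1 -w crc32c -y -C 2              # crc32c over a 2-element io vector
./build/examples/accel_perf -t 1 -w copy -y                     # plain copy with verification
./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y  # fill byte 128, queue depth 64, 64 tasks
./build/examples/accel_perf -t 1 -w copy_crc32c -y              # combined copy + crc32c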
00:06:33.137 18:36:43 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:33.137 18:36:43 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:33.137 18:36:43 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:33.137 18:36:43 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:33.137 00:06:33.137 real 0m0.023s 00:06:33.137 user 0m0.012s 00:06:33.137 sys 0m0.012s 00:06:33.137 18:36:43 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.137 18:36:43 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:33.137 ************************************ 00:06:33.137 END TEST accel_wrong_workload 00:06:33.137 ************************************ 00:06:33.137 Error: writing output failed: Broken pipe 00:06:33.137 18:36:43 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:33.137 18:36:43 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:33.137 18:36:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.137 18:36:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.137 ************************************ 00:06:33.137 START TEST accel_negative_buffers 00:06:33.137 ************************************ 00:06:33.137 18:36:43 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:33.137 18:36:43 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:33.137 18:36:43 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:33.137 18:36:43 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:33.137 18:36:43 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.137 18:36:43 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:33.137 18:36:43 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.137 18:36:43 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:33.137 18:36:43 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:33.137 18:36:43 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:33.137 18:36:43 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.137 18:36:43 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.137 18:36:43 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.137 18:36:43 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.137 18:36:43 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.137 18:36:43 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:33.137 18:36:43 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:33.137 -x option must be non-negative. 
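The accel_negative_buffers case feeds -x -1 to the xor workload and expects the parse failure reported above and detailed in the usage dump that follows. Per that usage text, -x sets the number of xor source buffers and its minimum is 2, so a well-formed invocation would look like the hedged example below (illustrative only, mirroring the verify flag used by the other positive tests):

./build/examples/accel_perf -t 1 -w xor -y -x 2   # at least two source buffers for xor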
00:06:33.137 [2024-07-20 18:36:43.403663] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:33.137 accel_perf options: 00:06:33.137 [-h help message] 00:06:33.137 [-q queue depth per core] 00:06:33.137 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:33.137 [-T number of threads per core 00:06:33.137 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:33.137 [-t time in seconds] 00:06:33.137 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:33.137 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:33.137 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:33.137 [-l for compress/decompress workloads, name of uncompressed input file 00:06:33.137 [-S for crc32c workload, use this seed value (default 0) 00:06:33.137 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:33.137 [-f for fill workload, use this BYTE value (default 255) 00:06:33.137 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:33.137 [-y verify result if this switch is on] 00:06:33.137 [-a tasks to allocate per core (default: same value as -q)] 00:06:33.137 Can be used to spread operations across a wider range of memory. 00:06:33.137 18:36:43 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:33.137 18:36:43 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:33.137 18:36:43 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:33.137 18:36:43 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:33.137 00:06:33.137 real 0m0.023s 00:06:33.137 user 0m0.015s 00:06:33.137 sys 0m0.009s 00:06:33.137 18:36:43 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.137 18:36:43 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:33.137 ************************************ 00:06:33.137 END TEST accel_negative_buffers 00:06:33.137 ************************************ 00:06:33.137 Error: writing output failed: Broken pipe 00:06:33.137 18:36:43 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:33.137 18:36:43 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:33.137 18:36:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.137 18:36:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.137 ************************************ 00:06:33.137 START TEST accel_crc32c 00:06:33.137 ************************************ 00:06:33.137 18:36:43 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:33.137 18:36:43 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:33.137 18:36:43 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:33.137 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.137 18:36:43 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:33.137 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.137 18:36:43 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:33.137 18:36:43 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:33.138 18:36:43 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.138 18:36:43 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.138 18:36:43 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.138 18:36:43 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.138 18:36:43 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.138 18:36:43 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:33.138 18:36:43 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:33.396 [2024-07-20 18:36:43.467422] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:33.396 [2024-07-20 18:36:43.467488] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265927 ] 00:06:33.396 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.396 [2024-07-20 18:36:43.529753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.396 [2024-07-20 18:36:43.622684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.396 18:36:43 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.396 18:36:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.768 18:36:44 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:34.768 18:36:44 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.768 00:06:34.768 real 0m1.411s 00:06:34.768 user 0m1.268s 00:06:34.768 sys 0m0.147s 00:06:34.768 18:36:44 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.768 18:36:44 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:34.768 ************************************ 00:06:34.768 END TEST accel_crc32c 00:06:34.768 ************************************ 00:06:34.768 18:36:44 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:34.768 18:36:44 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:34.768 18:36:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.768 18:36:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.768 ************************************ 00:06:34.768 START TEST accel_crc32c_C2 00:06:34.768 ************************************ 00:06:34.768 18:36:44 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:34.768 18:36:44 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.768 18:36:44 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:34.768 18:36:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.768 18:36:44 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:34.768 18:36:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.768 18:36:44 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:34.768 18:36:44 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.768 18:36:44 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.768 18:36:44 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.768 18:36:44 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.768 18:36:44 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.768 18:36:44 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.768 18:36:44 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:34.768 18:36:44 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:34.768 [2024-07-20 18:36:44.924652] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:34.768 [2024-07-20 18:36:44.924716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1266078 ] 00:06:34.768 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.768 [2024-07-20 18:36:44.986946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.768 [2024-07-20 18:36:45.080045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.026 18:36:45 accel.accel_crc32c_C2 
-- accel/accel.sh@19 -- # IFS=: 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.026 18:36:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 
00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.399 00:06:36.399 real 0m1.406s 00:06:36.399 user 0m1.257s 00:06:36.399 sys 0m0.152s 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:36.399 18:36:46 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:36.399 ************************************ 00:06:36.399 END TEST accel_crc32c_C2 00:06:36.399 ************************************ 00:06:36.399 18:36:46 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:36.399 18:36:46 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:36.399 18:36:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.399 18:36:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.399 ************************************ 00:06:36.399 START TEST accel_copy 00:06:36.399 ************************************ 00:06:36.399 18:36:46 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.399 
18:36:46 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:36.399 [2024-07-20 18:36:46.377679] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:36.399 [2024-07-20 18:36:46.377741] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1266262 ] 00:06:36.399 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.399 [2024-07-20 18:36:46.439374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.399 [2024-07-20 18:36:46.531964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.399 18:36:46 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.399 18:36:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:37.771 18:36:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.771 00:06:37.771 real 0m1.384s 00:06:37.771 user 0m1.245s 00:06:37.771 sys 0m0.140s 00:06:37.771 18:36:47 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.771 18:36:47 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:37.771 ************************************ 00:06:37.771 END TEST accel_copy 00:06:37.771 ************************************ 00:06:37.771 18:36:47 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:37.771 18:36:47 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:37.771 18:36:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.771 18:36:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.771 ************************************ 00:06:37.771 START TEST accel_fill 00:06:37.771 ************************************ 00:06:37.771 18:36:47 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:37.771 18:36:47 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:37.771 18:36:47 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:37.771 18:36:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:47 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:37.771 18:36:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:37.771 18:36:47 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:37.771 18:36:47 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:37.771 18:36:47 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.771 18:36:47 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.771 18:36:47 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.771 18:36:47 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.771 18:36:47 
accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.771 18:36:47 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:37.771 18:36:47 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:37.771 [2024-07-20 18:36:47.804260] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:37.771 [2024-07-20 18:36:47.804332] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1266512 ] 00:06:37.771 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.771 [2024-07-20 18:36:47.867444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.771 [2024-07-20 18:36:47.958900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:37.771 18:36:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.216 18:36:49 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:39.216 18:36:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.216 00:06:39.216 real 0m1.405s 00:06:39.216 user 0m1.258s 00:06:39.216 sys 0m0.149s 00:06:39.216 18:36:49 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.216 18:36:49 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:39.216 ************************************ 00:06:39.216 END TEST accel_fill 00:06:39.216 ************************************ 00:06:39.216 18:36:49 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:39.216 18:36:49 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:39.216 18:36:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.216 18:36:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.216 ************************************ 00:06:39.216 START TEST accel_copy_crc32c 00:06:39.216 ************************************ 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
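Both forms of the copy_crc32c case are visible here: the run_test wrapper that autotest drives, and at accel.sh@12 the binary invocation it expands to. Side by side, as they appear in the trace:

  # harness form, as recorded in the trace:
  #   run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
  # underlying call (path from the trace); the harness also adds -c /dev/fd/62 for the JSON accel config
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y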
00:06:39.216 [2024-07-20 18:36:49.252690] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:39.216 [2024-07-20 18:36:49.252761] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1266666 ] 00:06:39.216 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.216 [2024-07-20 18:36:49.315181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.216 [2024-07-20 18:36:49.409920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.216 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.217 18:36:49 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.217 18:36:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.593 00:06:40.593 real 0m1.408s 00:06:40.593 user 0m1.266s 00:06:40.593 sys 0m0.144s 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:40.593 18:36:50 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:40.593 ************************************ 00:06:40.593 END TEST accel_copy_crc32c 00:06:40.593 ************************************ 00:06:40.593 18:36:50 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:40.593 18:36:50 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:40.593 18:36:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.593 18:36:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.593 ************************************ 00:06:40.593 START TEST accel_copy_crc32c_C2 00:06:40.593 ************************************ 00:06:40.593 18:36:50 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:40.593 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.593 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:40.593 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.593 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:40.593 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:06:40.593 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:40.593 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.593 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.593 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.593 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.593 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.593 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.593 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:40.593 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:40.593 [2024-07-20 18:36:50.704614] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:40.593 [2024-07-20 18:36:50.704678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1266823 ] 00:06:40.593 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.593 [2024-07-20 18:36:50.767548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.593 [2024-07-20 18:36:50.860286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # 
accel_opc=copy_crc32c 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.852 18:36:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.784 00:06:41.784 real 0m1.402s 00:06:41.784 user 0m1.256s 00:06:41.784 sys 0m0.148s 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.784 18:36:52 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:41.784 
************************************ 00:06:41.784 END TEST accel_copy_crc32c_C2 00:06:41.784 ************************************ 00:06:42.043 18:36:52 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:42.043 18:36:52 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:42.043 18:36:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.043 18:36:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.043 ************************************ 00:06:42.043 START TEST accel_dualcast 00:06:42.043 ************************************ 00:06:42.043 18:36:52 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:06:42.043 18:36:52 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:42.043 18:36:52 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:42.043 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.043 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.043 18:36:52 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:42.043 18:36:52 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:42.043 18:36:52 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:42.043 18:36:52 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.043 18:36:52 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.043 18:36:52 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.043 18:36:52 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.043 18:36:52 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.043 18:36:52 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:42.043 18:36:52 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:42.043 [2024-07-20 18:36:52.152728] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
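accel_dualcast starts here in the same pattern, and the footer that closed the previous case (real 0m1.402s, user 0m1.256s, sys 0m0.148s above) has the layout of bash's time keyword wrapped around the whole run. A comparable hand-timed rerun of this dualcast case, sketched under the same assumed SPDK_DIR as the earlier snippet:

  # hypothetical timed rerun of the dualcast workload seen in the trace
  time "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dualcast -y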
00:06:42.043 [2024-07-20 18:36:52.152787] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1267059 ] 00:06:42.043 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.043 [2024-07-20 18:36:52.215729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.043 [2024-07-20 18:36:52.309724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.350 18:36:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.351 
18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.351 18:36:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.289 18:36:53 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:43.289 18:36:53 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.289 00:06:43.289 real 0m1.393s 00:06:43.289 user 0m1.261s 00:06:43.289 sys 0m0.134s 00:06:43.289 18:36:53 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.289 18:36:53 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:43.289 ************************************ 00:06:43.289 END TEST accel_dualcast 00:06:43.289 ************************************ 00:06:43.289 18:36:53 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:43.289 18:36:53 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:43.289 18:36:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.289 18:36:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.289 ************************************ 00:06:43.289 START TEST accel_compare 00:06:43.289 ************************************ 00:06:43.289 18:36:53 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:06:43.289 18:36:53 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:43.289 18:36:53 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:43.289 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.289 18:36:53 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:43.289 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.289 18:36:53 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:43.289 18:36:53 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:43.289 18:36:53 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.289 18:36:53 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.289 18:36:53 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.289 18:36:53 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.289 18:36:53 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.289 18:36:53 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:43.289 18:36:53 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:43.289 [2024-07-20 18:36:53.593292] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
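Every case closes with the same three accel.sh@27 checks, seen just above for dualcast: a module name and an opcode must have been reported, and the module must be the software engine. Written out as plain shell, with the variable names taken from the accel_module= and accel_opc= assignments in the trace, the assertions amount to:

  # end-of-test assertions, reconstructed from the expanded xtrace
  [[ -n "$accel_module" ]]            # a module was reported back
  [[ -n "$accel_opc" ]]               # the opcode (dualcast, compare, ...) was reported back
  [[ "$accel_module" == software ]]   # and the software engine handled it

The accel_compare case launched here finishes the same way further down.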
00:06:43.289 [2024-07-20 18:36:53.593357] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1267258 ] 00:06:43.548 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.548 [2024-07-20 18:36:53.657911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.548 [2024-07-20 18:36:53.749921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.548 18:36:53 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.548 18:36:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.919 18:36:54 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:44.919 18:36:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.919 00:06:44.919 real 0m1.408s 00:06:44.919 user 0m1.263s 00:06:44.919 sys 0m0.148s 00:06:44.919 18:36:54 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.919 18:36:54 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:44.919 ************************************ 00:06:44.919 END TEST accel_compare 00:06:44.919 ************************************ 00:06:44.919 18:36:55 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:44.919 18:36:55 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:44.919 18:36:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.919 18:36:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.919 ************************************ 00:06:44.919 START TEST accel_xor 00:06:44.919 ************************************ 00:06:44.919 18:36:55 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:06:44.919 18:36:55 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:44.919 18:36:55 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:44.919 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.919 18:36:55 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:44.919 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.919 18:36:55 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:44.919 18:36:55 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:44.919 18:36:55 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.919 18:36:55 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.919 18:36:55 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.919 18:36:55 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.919 18:36:55 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.919 18:36:55 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:44.919 18:36:55 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:44.919 [2024-07-20 18:36:55.048630] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
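The first accel_xor case passes no source count, and the val=2 step in the setup that follows is the default of two xor source buffers being read back. Harness and direct forms, the latter under the same assumed SPDK_DIR as before:

  # as driven by the harness (recorded in the trace):
  #   run_test accel_xor accel_test -t 1 -w xor -y
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y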
00:06:44.919 [2024-07-20 18:36:55.048698] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1267410 ] 00:06:44.919 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.919 [2024-07-20 18:36:55.111945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.919 [2024-07-20 18:36:55.204366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:45.177 18:36:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.178 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.178 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.178 18:36:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.178 18:36:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.178 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.178 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.178 18:36:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.178 18:36:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.178 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.178 18:36:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.549 
18:36:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.549 00:06:46.549 real 0m1.410s 00:06:46.549 user 0m1.266s 00:06:46.549 sys 0m0.146s 00:06:46.549 18:36:56 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.549 18:36:56 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:46.549 ************************************ 00:06:46.549 END TEST accel_xor 00:06:46.549 ************************************ 00:06:46.549 18:36:56 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:46.549 18:36:56 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:46.549 18:36:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.549 18:36:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.549 ************************************ 00:06:46.549 START TEST accel_xor 00:06:46.549 ************************************ 00:06:46.549 18:36:56 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.549 18:36:56 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:46.550 [2024-07-20 18:36:56.503985] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:46.550 [2024-07-20 18:36:56.504043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1267569 ] 00:06:46.550 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.550 [2024-07-20 18:36:56.566460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.550 [2024-07-20 18:36:56.659051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 18:36:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.942 
18:36:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:47.942 18:36:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.942 00:06:47.942 real 0m1.399s 00:06:47.942 user 0m1.260s 00:06:47.942 sys 0m0.142s 00:06:47.942 18:36:57 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.942 18:36:57 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:47.942 ************************************ 00:06:47.942 END TEST accel_xor 00:06:47.942 ************************************ 00:06:47.943 18:36:57 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:47.943 18:36:57 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:47.943 18:36:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.943 18:36:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.943 ************************************ 00:06:47.943 START TEST accel_dif_verify 00:06:47.943 ************************************ 00:06:47.943 18:36:57 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:06:47.943 18:36:57 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:47.943 18:36:57 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:47.943 18:36:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.943 18:36:57 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:47.943 18:36:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.943 18:36:57 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:47.943 18:36:57 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:47.943 18:36:57 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.943 18:36:57 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.943 18:36:57 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.943 18:36:57 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.943 18:36:57 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.943 18:36:57 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:47.943 18:36:57 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:47.943 [2024-07-20 18:36:57.945683] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:47.943 [2024-07-20 18:36:57.945746] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1267835 ] 00:06:47.943 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.943 [2024-07-20 18:36:58.008342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.943 [2024-07-20 18:36:58.099037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.943 
18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.943 18:36:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.318 
18:36:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:49.318 18:36:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.318 00:06:49.318 real 0m1.385s 00:06:49.318 user 0m1.247s 00:06:49.318 sys 0m0.142s 00:06:49.318 18:36:59 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.318 18:36:59 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:49.318 ************************************ 00:06:49.318 END TEST accel_dif_verify 00:06:49.318 ************************************ 00:06:49.318 18:36:59 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:49.318 18:36:59 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:49.318 18:36:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.318 18:36:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.318 ************************************ 00:06:49.318 START TEST accel_dif_generate 00:06:49.318 ************************************ 00:06:49.318 18:36:59 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:49.318 [2024-07-20 18:36:59.381141] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:49.318 [2024-07-20 18:36:59.381207] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1267998 ] 00:06:49.318 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.318 [2024-07-20 18:36:59.444972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.318 [2024-07-20 18:36:59.536804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.318 18:36:59 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:49.318 18:36:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.319 18:36:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:50.693 18:37:00 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.693 00:06:50.693 real 0m1.415s 00:06:50.693 user 0m1.270s 00:06:50.693 sys 
0m0.148s 00:06:50.693 18:37:00 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.693 18:37:00 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:50.693 ************************************ 00:06:50.693 END TEST accel_dif_generate 00:06:50.693 ************************************ 00:06:50.693 18:37:00 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:50.693 18:37:00 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:50.693 18:37:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.693 18:37:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.693 ************************************ 00:06:50.693 START TEST accel_dif_generate_copy 00:06:50.693 ************************************ 00:06:50.693 18:37:00 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:06:50.693 18:37:00 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:50.693 18:37:00 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:50.693 18:37:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.693 18:37:00 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:50.693 18:37:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.693 18:37:00 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:50.693 18:37:00 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:50.693 18:37:00 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.693 18:37:00 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.693 18:37:00 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.693 18:37:00 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.693 18:37:00 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.693 18:37:00 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:50.693 18:37:00 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:50.693 [2024-07-20 18:37:00.838264] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:50.693 [2024-07-20 18:37:00.838325] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1268155 ] 00:06:50.693 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.693 [2024-07-20 18:37:00.901151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.693 [2024-07-20 18:37:00.994509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.951 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.951 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.951 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.952 18:37:01 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.952 18:37:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.325 00:06:52.325 real 0m1.411s 00:06:52.325 user 0m1.263s 00:06:52.325 sys 0m0.151s 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.325 18:37:02 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:52.325 ************************************ 00:06:52.325 END TEST accel_dif_generate_copy 00:06:52.325 ************************************ 00:06:52.325 18:37:02 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:52.325 18:37:02 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:52.325 18:37:02 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:52.325 18:37:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.325 18:37:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.325 ************************************ 00:06:52.325 START TEST accel_comp 00:06:52.325 ************************************ 00:06:52.325 18:37:02 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:52.325 [2024-07-20 18:37:02.294782] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:52.325 [2024-07-20 18:37:02.294883] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1268320 ] 00:06:52.325 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.325 [2024-07-20 18:37:02.357002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.325 [2024-07-20 18:37:02.448734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.325 
18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.325 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.326 18:37:02 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.326 18:37:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.695 18:37:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.695 18:37:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.695 18:37:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.695 18:37:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.695 18:37:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.695 18:37:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.695 18:37:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.695 18:37:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.695 18:37:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.695 18:37:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.695 18:37:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.696 18:37:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.696 18:37:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.696 18:37:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.696 18:37:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.696 18:37:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.696 18:37:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.696 18:37:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.696 18:37:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.696 18:37:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.696 18:37:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.696 18:37:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.696 18:37:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.696 18:37:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.696 18:37:03 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.696 18:37:03 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:53.696 18:37:03 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.696 00:06:53.696 real 0m1.411s 00:06:53.696 user 0m1.265s 00:06:53.696 sys 0m0.150s 00:06:53.696 18:37:03 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.696 18:37:03 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:53.696 ************************************ 00:06:53.696 END TEST accel_comp 00:06:53.696 ************************************ 00:06:53.696 18:37:03 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:53.696 18:37:03 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:53.696 18:37:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.696 18:37:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.696 ************************************ 00:06:53.696 START TEST accel_decomp 00:06:53.696 ************************************ 00:06:53.696 18:37:03 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:53.696 [2024-07-20 18:37:03.747024] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:53.696 [2024-07-20 18:37:03.747097] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1268582 ] 00:06:53.696 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.696 [2024-07-20 18:37:03.809195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.696 [2024-07-20 18:37:03.899960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.696 18:37:03 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.696 18:37:03 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.696 18:37:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:55.068 18:37:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.068 00:06:55.068 real 0m1.397s 00:06:55.068 user 0m1.255s 00:06:55.068 sys 0m0.146s 00:06:55.068 18:37:05 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.068 18:37:05 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:55.068 ************************************ 00:06:55.068 END TEST accel_decomp 00:06:55.068 ************************************ 00:06:55.068 
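For orientation while reading this trace: each run_test block above reduces to a single accel_perf invocation, visible in the accel.sh@12 lines. A minimal manual reproduction of the decompress case just completed might look like the sketch below. The SPDK_DIR path is simply this job's workspace and is an assumption on any other machine, and unlike the harness no JSON accel config is piped in over /dev/fd/62, so the default software module is used (consistent with the accel_module=software lines in the trace).

# Sketch only -- not part of the captured output. Adjust SPDK_DIR for your checkout.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/examples/accel_perf" \
  -t 1 -w decompress \
  -l "$SPDK_DIR/test/accel/bib" \
  -y
# -t 1            run the workload for 1 second (the '1 seconds' value echoed above)
# -w decompress   the opcode under test (accel_opc=decompress in the trace)
# -l <file>       compressed input consumed by the workload (test/accel/bib)
# -y              request output verification (the val=Yes line), as far as the trace shows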
18:37:05 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:55.068 18:37:05 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:55.068 18:37:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.068 18:37:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.068 ************************************ 00:06:55.068 START TEST accel_decmop_full 00:06:55.068 ************************************ 00:06:55.068 18:37:05 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:55.068 18:37:05 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:06:55.068 18:37:05 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:06:55.068 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.068 18:37:05 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:55.068 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.068 18:37:05 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:55.068 18:37:05 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:06:55.068 18:37:05 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.068 18:37:05 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.068 18:37:05 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.068 18:37:05 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.068 18:37:05 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.068 18:37:05 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:06:55.068 18:37:05 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:06:55.068 [2024-07-20 18:37:05.191624] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:55.068 [2024-07-20 18:37:05.191688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1268743 ] 00:06:55.068 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.068 [2024-07-20 18:37:05.255871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.068 [2024-07-20 18:37:05.348517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.326 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.327 18:37:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:56.696 18:37:06 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.696 00:06:56.696 real 0m1.432s 00:06:56.696 user 0m1.289s 00:06:56.696 sys 0m0.146s 00:06:56.696 18:37:06 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.696 18:37:06 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:06:56.696 ************************************ 00:06:56.696 END TEST accel_decmop_full 00:06:56.696 ************************************ 00:06:56.696 18:37:06 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:56.696 18:37:06 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:56.696 18:37:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.696 18:37:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.696 ************************************ 00:06:56.696 START TEST accel_decomp_mcore 00:06:56.696 ************************************ 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:56.696 [2024-07-20 18:37:06.672650] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:56.696 [2024-07-20 18:37:06.672712] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1268900 ] 00:06:56.696 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.696 [2024-07-20 18:37:06.736616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.696 [2024-07-20 18:37:06.830389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.696 [2024-07-20 18:37:06.830467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.696 [2024-07-20 18:37:06.830562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.696 [2024-07-20 18:37:06.830559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.696 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.697 18:37:06 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 18:37:06 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 18:37:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.069 00:06:58.069 real 0m1.408s 00:06:58.069 user 0m4.686s 00:06:58.069 sys 0m0.153s 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.069 18:37:08 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:58.069 ************************************ 00:06:58.069 END TEST accel_decomp_mcore 00:06:58.069 ************************************ 00:06:58.069 18:37:08 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:58.069 18:37:08 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:58.069 18:37:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.069 18:37:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.069 ************************************ 00:06:58.069 START TEST accel_decomp_full_mcore 00:06:58.069 ************************************ 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:58.069 [2024-07-20 18:37:08.130296] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:58.069 [2024-07-20 18:37:08.130358] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1269171 ] 00:06:58.069 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.069 [2024-07-20 18:37:08.193637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.069 [2024-07-20 18:37:08.289619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.069 [2024-07-20 18:37:08.289689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.069 [2024-07-20 18:37:08.289781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.069 [2024-07-20 18:37:08.289783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.069 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:58.070 18:37:08 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.070 18:37:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.442 00:06:59.442 real 0m1.421s 00:06:59.442 user 0m4.719s 00:06:59.442 sys 0m0.168s 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.442 18:37:09 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:59.442 ************************************ 00:06:59.442 END TEST accel_decomp_full_mcore 00:06:59.442 ************************************ 00:06:59.442 18:37:09 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:59.442 18:37:09 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:59.442 18:37:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.442 18:37:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.442 ************************************ 00:06:59.442 START TEST accel_decomp_mthread 00:06:59.442 ************************************ 00:06:59.442 18:37:09 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:59.442 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:59.442 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:59.442 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.442 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:59.442 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.442 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:59.442 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:59.442 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.442 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.442 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.442 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.442 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.442 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:59.442 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r 
. 00:06:59.442 [2024-07-20 18:37:09.598384] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:59.442 [2024-07-20 18:37:09.598448] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1269334 ] 00:06:59.442 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.442 [2024-07-20 18:37:09.657833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.442 [2024-07-20 18:37:09.748332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.700 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:59.700 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.700 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.700 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.700 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:59.700 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.700 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.700 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.700 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:59.700 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.700 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.701 18:37:09 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.701 18:37:09 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.070 00:07:01.070 real 0m1.410s 00:07:01.070 user 0m1.271s 00:07:01.070 sys 0m0.142s 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.070 18:37:10 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:01.070 ************************************ 00:07:01.070 END TEST accel_decomp_mthread 00:07:01.070 ************************************ 00:07:01.070 18:37:11 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:01.070 18:37:11 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:01.070 18:37:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.070 18:37:11 
accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.070 ************************************ 00:07:01.070 START TEST accel_decomp_full_mthread 00:07:01.070 ************************************ 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:01.070 [2024-07-20 18:37:11.056526] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
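The accel_perf invocation traced here drives a multithreaded software decompress workload: two worker threads, the pre-compressed bib fixture as input, and output verification enabled. A minimal standalone re-run of that invocation, sketched under the assumption that the same workspace checkout is available and that an accel app config with no hardware modules is fed over fd 62, would look roughly like:

#!/usr/bin/env bash
# Rough sketch: re-run the decompress workload that accel.sh launches here;
# assumes a local SPDK checkout at SPDK_DIR.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}

args=(
  -c /dev/fd/62                    # app JSON config is read from fd 62
  -t 1                             # run the workload for 1 second
  -w decompress                    # decompress operation
  -l "$SPDK_DIR/test/accel/bib"    # pre-compressed input file used by the test
  -y                               # verify the decompressed output
  -o 0
  -T 2                             # two worker threads (the "mthread" case)
)

# Assumed minimal config: only the accel subsystem with no module entries, so
# the software module services the operation (the harness builds this JSON
# itself from accel_json_cfg; this hand-written shape is an assumption).
cfg='{"subsystems": [{"subsystem": "accel", "config": []}]}'

"$SPDK_DIR/build/examples/accel_perf" "${args[@]}" 62<<< "$cfg"

Keeping the config on a file descriptor rather than a temp file mirrors how accel.sh wires its generated JSON into accel_perf via -c /dev/fd/62.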
00:07:01.070 [2024-07-20 18:37:11.056588] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1269494 ] 00:07:01.070 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.070 [2024-07-20 18:37:11.119175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.070 [2024-07-20 18:37:11.211715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.070 18:37:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.455 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.455 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.455 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.455 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.455 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.455 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.455 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.455 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.455 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.455 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.455 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.455 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.455 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.455 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.455 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.455 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.456 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.456 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.456 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.456 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.456 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.456 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.456 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.456 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.456 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.456 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.456 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.456 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.456 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.456 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:02.456 18:37:12 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.456 00:07:02.456 real 0m1.448s 00:07:02.456 user 0m1.300s 00:07:02.456 sys 0m0.152s 00:07:02.456 18:37:12 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.456 18:37:12 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:02.456 ************************************ 00:07:02.456 END TEST accel_decomp_full_mthread 00:07:02.456 
************************************ 00:07:02.456 18:37:12 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:02.456 18:37:12 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:02.456 18:37:12 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:02.456 18:37:12 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:02.456 18:37:12 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.456 18:37:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.456 18:37:12 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.456 18:37:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.456 18:37:12 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.456 18:37:12 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.456 18:37:12 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.456 18:37:12 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:02.456 18:37:12 accel -- accel/accel.sh@41 -- # jq -r . 00:07:02.456 ************************************ 00:07:02.456 START TEST accel_dif_functional_tests 00:07:02.456 ************************************ 00:07:02.456 18:37:12 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:02.456 [2024-07-20 18:37:12.574871] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:02.456 [2024-07-20 18:37:12.574929] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1269652 ] 00:07:02.456 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.456 [2024-07-20 18:37:12.638102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:02.456 [2024-07-20 18:37:12.732849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.456 [2024-07-20 18:37:12.732906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.456 [2024-07-20 18:37:12.732910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.713 00:07:02.713 00:07:02.713 CUnit - A unit testing framework for C - Version 2.1-3 00:07:02.713 http://cunit.sourceforge.net/ 00:07:02.713 00:07:02.713 00:07:02.713 Suite: accel_dif 00:07:02.714 Test: verify: DIF generated, GUARD check ...passed 00:07:02.714 Test: verify: DIF generated, APPTAG check ...passed 00:07:02.714 Test: verify: DIF generated, REFTAG check ...passed 00:07:02.714 Test: verify: DIF not generated, GUARD check ...[2024-07-20 18:37:12.825915] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:02.714 passed 00:07:02.714 Test: verify: DIF not generated, APPTAG check ...[2024-07-20 18:37:12.825985] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:02.714 passed 00:07:02.714 Test: verify: DIF not generated, REFTAG check ...[2024-07-20 18:37:12.826017] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:02.714 passed 00:07:02.714 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:02.714 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-20 18:37:12.826078] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:02.714 passed 00:07:02.714 
Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:02.714 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:02.714 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:02.714 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-20 18:37:12.826208] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:02.714 passed 00:07:02.714 Test: verify copy: DIF generated, GUARD check ...passed 00:07:02.714 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:02.714 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:02.714 Test: verify copy: DIF not generated, GUARD check ...[2024-07-20 18:37:12.826370] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:02.714 passed 00:07:02.714 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-20 18:37:12.826404] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:02.714 passed 00:07:02.714 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-20 18:37:12.826437] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:02.714 passed 00:07:02.714 Test: generate copy: DIF generated, GUARD check ...passed 00:07:02.714 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:02.714 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:02.714 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:02.714 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:02.714 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:02.714 Test: generate copy: iovecs-len validate ...[2024-07-20 18:37:12.826646] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:02.714 passed 00:07:02.714 Test: generate copy: buffer alignment validate ...passed 00:07:02.714 00:07:02.714 Run Summary: Type Total Ran Passed Failed Inactive 00:07:02.714 suites 1 1 n/a 0 0 00:07:02.714 tests 26 26 26 0 0 00:07:02.714 asserts 115 115 115 0 n/a 00:07:02.714 00:07:02.714 Elapsed time = 0.002 seconds 00:07:02.714 00:07:02.714 real 0m0.498s 00:07:02.714 user 0m0.766s 00:07:02.714 sys 0m0.180s 00:07:02.714 18:37:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.714 18:37:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:02.714 ************************************ 00:07:02.714 END TEST accel_dif_functional_tests 00:07:02.714 ************************************ 00:07:02.972 00:07:02.972 real 0m31.704s 00:07:02.972 user 0m35.020s 00:07:02.972 sys 0m4.629s 00:07:02.972 18:37:13 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.972 18:37:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.972 ************************************ 00:07:02.972 END TEST accel 00:07:02.972 ************************************ 00:07:02.972 18:37:13 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:02.972 18:37:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:02.972 18:37:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.972 18:37:13 -- common/autotest_common.sh@10 -- # set +x 00:07:02.972 ************************************ 00:07:02.972 START TEST accel_rpc 00:07:02.972 ************************************ 00:07:02.972 18:37:13 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:02.972 * Looking for test storage... 00:07:02.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:02.972 18:37:13 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:02.972 18:37:13 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1269838 00:07:02.972 18:37:13 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:02.972 18:37:13 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1269838 00:07:02.972 18:37:13 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 1269838 ']' 00:07:02.972 18:37:13 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.972 18:37:13 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:02.972 18:37:13 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.972 18:37:13 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:02.972 18:37:13 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.972 [2024-07-20 18:37:13.196971] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
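The accel_rpc suite that begins here starts spdk_tgt with --wait-for-rpc so that opcode-to-module assignments can still be changed before subsystem initialization, then checks the assignment over JSON-RPC. A condensed sketch of that sequence, assuming rpc.py talks to the default /var/tmp/spdk.sock socket and substituting a fixed sleep for the harness's waitforlisten helper:

#!/usr/bin/env bash
# Condensed sketch of the accel_assign_opcode flow traced below; assumes the
# default RPC socket and uses a fixed sleep instead of waitforlisten.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
RPC="$SPDK_DIR/scripts/rpc.py"

# --wait-for-rpc holds off subsystem init until framework_start_init is called.
"$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &
tgt_pid=$!
trap 'kill "$tgt_pid" 2>/dev/null' EXIT
sleep 2   # crude stand-in for waitforlisten

# Pre-init, the copy opcode can be pointed at any module name (even a bogus
# one), then re-pointed at the software module that will actually service it.
"$RPC" accel_assign_opc -o copy -m incorrect
"$RPC" accel_assign_opc -o copy -m software

# Finish initialization and confirm which assignment stuck.
"$RPC" framework_start_init
"$RPC" accel_get_opc_assignments | jq -r .copy | grep software

The two NOTICE lines in the trace ("Operation copy will be assigned to module ...") correspond to the two accel_assign_opc calls above.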
00:07:02.972 [2024-07-20 18:37:13.197048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1269838 ] 00:07:02.972 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.972 [2024-07-20 18:37:13.255227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.229 [2024-07-20 18:37:13.346203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.229 18:37:13 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:03.229 18:37:13 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:03.229 18:37:13 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:03.229 18:37:13 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:03.229 18:37:13 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:03.229 18:37:13 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:03.229 18:37:13 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:03.229 18:37:13 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:03.229 18:37:13 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.229 18:37:13 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.229 ************************************ 00:07:03.229 START TEST accel_assign_opcode 00:07:03.229 ************************************ 00:07:03.229 18:37:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:03.229 18:37:13 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:03.229 18:37:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.229 18:37:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:03.229 [2024-07-20 18:37:13.434899] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:03.229 18:37:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.229 18:37:13 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:03.229 18:37:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.229 18:37:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:03.229 [2024-07-20 18:37:13.442897] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:03.229 18:37:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.229 18:37:13 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:03.229 18:37:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.229 18:37:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:03.486 18:37:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.486 18:37:13 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:03.486 18:37:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.486 18:37:13 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:03.486 18:37:13 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:03.486 18:37:13 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:03.486 18:37:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.486 software 00:07:03.486 00:07:03.486 real 0m0.294s 00:07:03.486 user 0m0.040s 00:07:03.486 sys 0m0.003s 00:07:03.486 18:37:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.486 18:37:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:03.486 ************************************ 00:07:03.486 END TEST accel_assign_opcode 00:07:03.486 ************************************ 00:07:03.486 18:37:13 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1269838 00:07:03.486 18:37:13 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 1269838 ']' 00:07:03.486 18:37:13 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 1269838 00:07:03.486 18:37:13 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:03.486 18:37:13 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:03.486 18:37:13 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1269838 00:07:03.486 18:37:13 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:03.486 18:37:13 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:03.486 18:37:13 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1269838' 00:07:03.486 killing process with pid 1269838 00:07:03.486 18:37:13 accel_rpc -- common/autotest_common.sh@965 -- # kill 1269838 00:07:03.486 18:37:13 accel_rpc -- common/autotest_common.sh@970 -- # wait 1269838 00:07:04.050 00:07:04.050 real 0m1.076s 00:07:04.050 user 0m1.006s 00:07:04.050 sys 0m0.423s 00:07:04.050 18:37:14 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.050 18:37:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.050 ************************************ 00:07:04.050 END TEST accel_rpc 00:07:04.050 ************************************ 00:07:04.050 18:37:14 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:04.050 18:37:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:04.050 18:37:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.050 18:37:14 -- common/autotest_common.sh@10 -- # set +x 00:07:04.050 ************************************ 00:07:04.050 START TEST app_cmdline 00:07:04.050 ************************************ 00:07:04.050 18:37:14 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:04.050 * Looking for test storage... 
00:07:04.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:04.050 18:37:14 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:04.050 18:37:14 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1270044 00:07:04.050 18:37:14 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:04.050 18:37:14 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1270044 00:07:04.050 18:37:14 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 1270044 ']' 00:07:04.050 18:37:14 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.050 18:37:14 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:04.050 18:37:14 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.050 18:37:14 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:04.050 18:37:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:04.050 [2024-07-20 18:37:14.322172] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:04.050 [2024-07-20 18:37:14.322266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1270044 ] 00:07:04.050 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.307 [2024-07-20 18:37:14.384543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.308 [2024-07-20 18:37:14.475253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.565 18:37:14 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:04.565 18:37:14 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:04.565 18:37:14 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:04.823 { 00:07:04.823 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:07:04.823 "fields": { 00:07:04.823 "major": 24, 00:07:04.823 "minor": 5, 00:07:04.823 "patch": 1, 00:07:04.823 "suffix": "-pre", 00:07:04.823 "commit": "5fa2f5086" 00:07:04.823 } 00:07:04.823 } 00:07:04.823 18:37:14 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:04.823 18:37:14 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:04.823 18:37:14 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:04.823 18:37:14 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:04.823 18:37:14 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:04.823 18:37:14 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.823 18:37:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:04.823 18:37:14 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:04.823 18:37:14 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:04.823 18:37:14 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.823 18:37:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:04.823 18:37:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:04.823 18:37:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.823 18:37:15 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:04.823 18:37:15 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.823 18:37:15 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.823 18:37:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.823 18:37:15 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.823 18:37:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.823 18:37:15 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.823 18:37:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.823 18:37:15 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.823 18:37:15 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:04.823 18:37:15 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:05.080 request: 00:07:05.080 { 00:07:05.080 "method": "env_dpdk_get_mem_stats", 00:07:05.080 "req_id": 1 00:07:05.080 } 00:07:05.080 Got JSON-RPC error response 00:07:05.080 response: 00:07:05.080 { 00:07:05.080 "code": -32601, 00:07:05.080 "message": "Method not found" 00:07:05.080 } 00:07:05.080 18:37:15 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:05.080 18:37:15 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:05.080 18:37:15 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:05.080 18:37:15 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:05.080 18:37:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1270044 00:07:05.080 18:37:15 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 1270044 ']' 00:07:05.080 18:37:15 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 1270044 00:07:05.080 18:37:15 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:05.080 18:37:15 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:05.080 18:37:15 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1270044 00:07:05.080 18:37:15 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:05.081 18:37:15 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:05.081 18:37:15 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1270044' 00:07:05.081 killing process with pid 1270044 00:07:05.081 18:37:15 app_cmdline -- common/autotest_common.sh@965 -- # kill 1270044 00:07:05.081 18:37:15 app_cmdline -- common/autotest_common.sh@970 -- # wait 1270044 00:07:05.647 00:07:05.647 real 0m1.447s 00:07:05.647 user 0m1.748s 00:07:05.647 sys 0m0.451s 00:07:05.647 18:37:15 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.647 18:37:15 
app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:05.647 ************************************ 00:07:05.647 END TEST app_cmdline 00:07:05.647 ************************************ 00:07:05.647 18:37:15 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:05.647 18:37:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:05.647 18:37:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.647 18:37:15 -- common/autotest_common.sh@10 -- # set +x 00:07:05.647 ************************************ 00:07:05.647 START TEST version 00:07:05.647 ************************************ 00:07:05.647 18:37:15 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:05.647 * Looking for test storage... 00:07:05.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:05.647 18:37:15 version -- app/version.sh@17 -- # get_header_version major 00:07:05.647 18:37:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.647 18:37:15 version -- app/version.sh@14 -- # cut -f2 00:07:05.647 18:37:15 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.647 18:37:15 version -- app/version.sh@17 -- # major=24 00:07:05.647 18:37:15 version -- app/version.sh@18 -- # get_header_version minor 00:07:05.647 18:37:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.647 18:37:15 version -- app/version.sh@14 -- # cut -f2 00:07:05.647 18:37:15 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.647 18:37:15 version -- app/version.sh@18 -- # minor=5 00:07:05.647 18:37:15 version -- app/version.sh@19 -- # get_header_version patch 00:07:05.647 18:37:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.647 18:37:15 version -- app/version.sh@14 -- # cut -f2 00:07:05.647 18:37:15 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.647 18:37:15 version -- app/version.sh@19 -- # patch=1 00:07:05.647 18:37:15 version -- app/version.sh@20 -- # get_header_version suffix 00:07:05.647 18:37:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.647 18:37:15 version -- app/version.sh@14 -- # cut -f2 00:07:05.647 18:37:15 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.647 18:37:15 version -- app/version.sh@20 -- # suffix=-pre 00:07:05.647 18:37:15 version -- app/version.sh@22 -- # version=24.5 00:07:05.647 18:37:15 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:05.647 18:37:15 version -- app/version.sh@25 -- # version=24.5.1 00:07:05.647 18:37:15 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:05.647 18:37:15 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:05.647 18:37:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
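The version test traced here recovers the release numbers by scraping the SPDK_VERSION_* defines out of include/spdk/version.h and comparing them with the bundled Python package. A trimmed-down sketch of that scrape, assuming the tab-separated #define layout the cut -f2 pipeline relies on and that the package under python/ is importable on its own:

#!/usr/bin/env bash
# Trimmed sketch of the header scrape done by test/app/version.sh; assumes
# version.h keeps '#define SPDK_VERSION_<FIELD><TAB><value>' lines.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
hdr="$SPDK_DIR/include/spdk/version.h"

get_header_version() {
  # e.g. '#define SPDK_VERSION_MAJOR<TAB>24' -> 24 (quotes stripped for SUFFIX)
  grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)
minor=$(get_header_version MINOR)
patch=$(get_header_version PATCH)
suffix=$(get_header_version SUFFIX)

version="$major.$minor"
if (( patch != 0 )); then
  version="$version.$patch"
fi
echo "header version: ${version}${suffix}"

# Print the bundled package's view for comparison; the harness normalizes the
# '-pre' suffix to 'rc0' before asserting the two match.
PYTHONPATH="$SPDK_DIR/python" python3 -c 'import spdk; print(spdk.__version__)'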
00:07:05.647 18:37:15 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:07:05.647 18:37:15 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:05.647 00:07:05.647 real 0m0.109s 00:07:05.647 user 0m0.067s 00:07:05.647 sys 0m0.063s 00:07:05.647 18:37:15 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.647 18:37:15 version -- common/autotest_common.sh@10 -- # set +x 00:07:05.647 ************************************ 00:07:05.647 END TEST version 00:07:05.647 ************************************ 00:07:05.647 18:37:15 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:05.647 18:37:15 -- spdk/autotest.sh@198 -- # uname -s 00:07:05.647 18:37:15 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:05.647 18:37:15 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:05.647 18:37:15 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:05.647 18:37:15 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:05.647 18:37:15 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:05.647 18:37:15 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:05.647 18:37:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:05.647 18:37:15 -- common/autotest_common.sh@10 -- # set +x 00:07:05.647 18:37:15 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:05.647 18:37:15 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:05.647 18:37:15 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:05.647 18:37:15 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:05.647 18:37:15 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:05.647 18:37:15 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:05.647 18:37:15 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:05.647 18:37:15 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:05.647 18:37:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.647 18:37:15 -- common/autotest_common.sh@10 -- # set +x 00:07:05.647 ************************************ 00:07:05.647 START TEST nvmf_tcp 00:07:05.647 ************************************ 00:07:05.647 18:37:15 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:05.647 * Looking for test storage... 00:07:05.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.647 18:37:15 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.647 18:37:15 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.647 18:37:15 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.647 18:37:15 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.647 18:37:15 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.647 18:37:15 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.647 18:37:15 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:05.647 18:37:15 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.647 18:37:15 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.648 18:37:15 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.648 18:37:15 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.648 18:37:15 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:05.648 18:37:15 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:05.648 18:37:15 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:05.648 18:37:15 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:05.648 18:37:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.648 18:37:15 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:05.648 18:37:15 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:05.648 18:37:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:05.648 18:37:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.648 18:37:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.906 ************************************ 00:07:05.906 START TEST nvmf_example 00:07:05.906 ************************************ 00:07:05.906 18:37:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:05.906 * Looking for test storage... 
00:07:05.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:05.906 18:37:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:05.907 18:37:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:07.809 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:07.809 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:07.809 Found net devices under 
0000:0a:00.0: cvl_0_0 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:07.809 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:07.809 18:37:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:07.809 18:37:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:07.809 18:37:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:07.809 18:37:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:07.809 18:37:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:07.809 18:37:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:07.809 18:37:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:07.809 18:37:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:07.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:07.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:07:07.809 00:07:07.809 --- 10.0.0.2 ping statistics --- 00:07:07.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.809 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:07:07.809 18:37:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:07.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:07.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:07:07.809 00:07:07.809 --- 10.0.0.1 ping statistics --- 00:07:07.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.809 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:07:07.809 18:37:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:07.809 18:37:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:07.809 18:37:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:07.809 18:37:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:07.809 18:37:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:07.810 18:37:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:07.810 18:37:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:07.810 18:37:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:07.810 18:37:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:07.810 18:37:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:07.810 18:37:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:07.810 18:37:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:07.810 18:37:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:07.810 18:37:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:07.810 18:37:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:07.810 18:37:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1271943 00:07:07.810 18:37:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:07.810 18:37:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:07.810 18:37:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1271943 00:07:07.810 18:37:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 1271943 ']' 00:07:07.810 18:37:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.810 18:37:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:07.810 18:37:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
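The nvmf_tcp_init sequence traced above builds the test topology out of the two ports of one NIC: cvl_0_0 is moved into a private network namespace and acts as the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; an iptables rule then opens TCP port 4420 and a ping in each direction verifies the link. A minimal stand-alone sketch of those steps, reusing the interface, namespace and address names from this run's trace (run as root):

# Loopback NVMe/TCP topology, as wired up by nvmf_tcp_init in the trace above.
TARGET_IF=cvl_0_0          # port that moves into the namespace (target side)
INITIATOR_IF=cvl_0_1       # port left in the root namespace (initiator side)
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the default NVMe/TCP port and check reachability in both directions.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1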
00:07:07.810 18:37:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:07.810 18:37:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.067 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:08.999 18:37:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:08.999 EAL: No free 2048 kB hugepages reported on node 1 
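Once the example target is up and listening on /var/tmp/spdk.sock, the test provisions it entirely over JSON-RPC: a TCP transport, a 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace and a listener on 10.0.0.2:4420, and then drives it from the initiator side with spdk_nvme_perf. A sketch of the equivalent manual sequence, assuming it is run from the root of an SPDK build tree; the RPC verbs, flags, NQN and sizes are the ones visible in the trace, while rpc.py and the sleep are stand-ins for the test's rpc_cmd/waitforlisten helpers:

# Start the nvmf example target in the target namespace: shm id 0, core mask 0xF.
ip netns exec cvl_0_0_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &
nvmfpid=$!
sleep 2    # crude stand-in for waitforlisten on /var/tmp/spdk.sock

RPC=./scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192        # flags copied from the trace
$RPC bdev_malloc_create 64 512                      # 64 MiB bdev, 512 B blocks; the trace names it Malloc0
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns "$NQN" Malloc0
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# 10 s of queue-depth-64, 4 KiB random I/O at a 30/70 read/write mix, from the initiator side.
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN"

The latency summary that follows in the log is the output of that spdk_nvme_perf run.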
00:07:21.220 Initializing NVMe Controllers 00:07:21.220 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:21.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:21.220 Initialization complete. Launching workers. 00:07:21.220 ======================================================== 00:07:21.220 Latency(us) 00:07:21.220 Device Information : IOPS MiB/s Average min max 00:07:21.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12282.31 47.98 5210.67 911.06 15273.36 00:07:21.220 ======================================================== 00:07:21.220 Total : 12282.31 47.98 5210.67 911.06 15273.36 00:07:21.220 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:21.220 rmmod nvme_tcp 00:07:21.220 rmmod nvme_fabrics 00:07:21.220 rmmod nvme_keyring 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1271943 ']' 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1271943 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 1271943 ']' 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 1271943 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1271943 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1271943' 00:07:21.220 killing process with pid 1271943 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 1271943 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 1271943 00:07:21.220 nvmf threads initialize successfully 00:07:21.220 bdev subsystem init successfully 00:07:21.220 created a nvmf target service 00:07:21.220 create targets's poll groups done 00:07:21.220 all subsystems of target started 00:07:21.220 nvmf target is running 00:07:21.220 all subsystems of target stopped 00:07:21.220 destroy targets's poll groups done 00:07:21.220 destroyed the nvmf target service 00:07:21.220 bdev subsystem finish successfully 00:07:21.220 nvmf threads destroy successfully 00:07:21.220 18:37:29 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:21.220 18:37:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.478 18:37:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:21.478 18:37:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:21.478 18:37:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:21.478 18:37:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.478 00:07:21.478 real 0m15.789s 00:07:21.478 user 0m45.037s 00:07:21.478 sys 0m3.138s 00:07:21.478 18:37:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.478 18:37:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.478 ************************************ 00:07:21.478 END TEST nvmf_example 00:07:21.478 ************************************ 00:07:21.740 18:37:31 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:21.740 18:37:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:21.740 18:37:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.740 18:37:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:21.740 ************************************ 00:07:21.740 START TEST nvmf_filesystem 00:07:21.740 ************************************ 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:21.740 * Looking for test storage... 
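nvmftestfini, traced just above before the filesystem test begins, unwinds the example test in reverse: the kernel initiator modules are unloaded, the target process is killed and reaped, the SPDK-created namespace is removed and the initiator address is flushed. A bare-bones sketch of that teardown; nvmfpid is the PID captured when the target was launched, and plain ip netns delete is used here as a stand-in for the test's _remove_spdk_ns helper:

# Teardown mirroring nvmftestfini in the trace above.
sync
modprobe -v -r nvme-tcp                # drops nvme_tcp, nvme_fabrics, nvme_keyring per the rmmod lines
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"     # stop the example target and collect its exit status
ip netns delete cvl_0_0_ns_spdk        # stand-in for _remove_spdk_ns
ip -4 addr flush cvl_0_1

The "nvmf target is running ... nvmf threads destroy successfully" lines in the trace are the example target's own stdout, flushed as it shuts down.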
00:07:21.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:21.740 18:37:31 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:21.740 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:21.741 18:37:31 
nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:21.741 #define SPDK_CONFIG_H 00:07:21.741 #define SPDK_CONFIG_APPS 1 00:07:21.741 #define SPDK_CONFIG_ARCH native 00:07:21.741 #undef SPDK_CONFIG_ASAN 00:07:21.741 #undef SPDK_CONFIG_AVAHI 00:07:21.741 #undef SPDK_CONFIG_CET 00:07:21.741 #define SPDK_CONFIG_COVERAGE 1 00:07:21.741 #define SPDK_CONFIG_CROSS_PREFIX 00:07:21.741 #undef SPDK_CONFIG_CRYPTO 00:07:21.741 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:21.741 #undef SPDK_CONFIG_CUSTOMOCF 00:07:21.741 #undef SPDK_CONFIG_DAOS 00:07:21.741 #define SPDK_CONFIG_DAOS_DIR 00:07:21.741 #define SPDK_CONFIG_DEBUG 1 00:07:21.741 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:21.741 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:21.741 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:21.741 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:21.741 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:21.741 #undef SPDK_CONFIG_DPDK_UADK 00:07:21.741 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:21.741 #define SPDK_CONFIG_EXAMPLES 1 00:07:21.741 #undef SPDK_CONFIG_FC 00:07:21.741 #define SPDK_CONFIG_FC_PATH 00:07:21.741 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:21.741 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:21.741 #undef SPDK_CONFIG_FUSE 00:07:21.741 #undef SPDK_CONFIG_FUZZER 00:07:21.741 #define SPDK_CONFIG_FUZZER_LIB 00:07:21.741 #undef SPDK_CONFIG_GOLANG 00:07:21.741 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:21.741 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:21.741 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:21.741 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:21.741 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:21.741 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:21.741 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:21.741 #define SPDK_CONFIG_IDXD 1 00:07:21.741 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:21.741 #undef SPDK_CONFIG_IPSEC_MB 00:07:21.741 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:21.741 #define SPDK_CONFIG_ISAL 1 00:07:21.741 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:21.741 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:21.741 #define SPDK_CONFIG_LIBDIR 00:07:21.741 #undef SPDK_CONFIG_LTO 00:07:21.741 #define SPDK_CONFIG_MAX_LCORES 
00:07:21.741 #define SPDK_CONFIG_NVME_CUSE 1 00:07:21.741 #undef SPDK_CONFIG_OCF 00:07:21.741 #define SPDK_CONFIG_OCF_PATH 00:07:21.741 #define SPDK_CONFIG_OPENSSL_PATH 00:07:21.741 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:21.741 #define SPDK_CONFIG_PGO_DIR 00:07:21.741 #undef SPDK_CONFIG_PGO_USE 00:07:21.741 #define SPDK_CONFIG_PREFIX /usr/local 00:07:21.741 #undef SPDK_CONFIG_RAID5F 00:07:21.741 #undef SPDK_CONFIG_RBD 00:07:21.741 #define SPDK_CONFIG_RDMA 1 00:07:21.741 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:21.741 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:21.741 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:21.741 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:21.741 #define SPDK_CONFIG_SHARED 1 00:07:21.741 #undef SPDK_CONFIG_SMA 00:07:21.741 #define SPDK_CONFIG_TESTS 1 00:07:21.741 #undef SPDK_CONFIG_TSAN 00:07:21.741 #define SPDK_CONFIG_UBLK 1 00:07:21.741 #define SPDK_CONFIG_UBSAN 1 00:07:21.741 #undef SPDK_CONFIG_UNIT_TESTS 00:07:21.741 #undef SPDK_CONFIG_URING 00:07:21.741 #define SPDK_CONFIG_URING_PATH 00:07:21.741 #undef SPDK_CONFIG_URING_ZNS 00:07:21.741 #undef SPDK_CONFIG_USDT 00:07:21.741 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:21.741 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:21.741 #define SPDK_CONFIG_VFIO_USER 1 00:07:21.741 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:21.741 #define SPDK_CONFIG_VHOST 1 00:07:21.741 #define SPDK_CONFIG_VIRTIO 1 00:07:21.741 #undef SPDK_CONFIG_VTUNE 00:07:21.741 #define SPDK_CONFIG_VTUNE_DIR 00:07:21.741 #define SPDK_CONFIG_WERROR 1 00:07:21.741 #define SPDK_CONFIG_WPDK_DIR 00:07:21.741 #undef SPDK_CONFIG_XNVME 00:07:21.741 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.741 18:37:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v23.11 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:21.742 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # : 0 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm 
-rf /var/tmp/asan_suppression_file 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export 
CLEAR_HUGE=yes 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 1273654 ]] 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 1273654 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.MEhcE1 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.MEhcE1/tests/target /tmp/spdk.MEhcE1 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:21.743 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # 
mounts["$mount"]=spdk_devtmpfs 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=953643008 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4330786816 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=52891742208 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994721280 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9102979072 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30941724672 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997360640 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=55635968 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12390182912 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398944256 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8761344 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:21.744 18:37:31 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30995230720 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997360640 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=2129920 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199468032 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199472128 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:21.744 * Looking for test storage... 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=52891742208 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=11317571584 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:21.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:07:21.744 
18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:21.744 18:37:31 
nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.744 18:37:31 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
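For reference, the NVME_HOSTNQN/NVME_HOSTID pair generated above is what the initiator side of this test later hands to nvme-cli when it connects to the target. A minimal stand-alone sketch of that usage (illustrative values, not the autotest scripts themselves; assumes nvme-cli is installed and a target is listening):

#!/usr/bin/env bash
set -euo pipefail
HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
HOSTID=${HOSTNQN##*uuid:}            # the test reuses the UUID portion as the host ID
SUBNQN=nqn.2016-06.io.spdk:cnode1    # subsystem created later in this run
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN" \
    --hostnqn="$HOSTNQN" --hostid="$HOSTID"
nvme list                            # the exported namespace shows up as /dev/nvmeXnY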
00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:21.745 18:37:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
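The e810/x722/mlx arrays being filled here are keyed on PCI vendor:device IDs (0x8086:0x159b is the Intel E810 port this host exposes). A rough stand-alone equivalent of that discovery step, using lspci and sysfs rather than the script's internal pci_bus_cache (a sketch, not the gather_supported_nvmf_pci_devs implementation):

#!/usr/bin/env bash
set -euo pipefail
# List E810 functions (vendor 8086, device 159b) with full PCI domain addresses.
mapfile -t pci_addrs < <(lspci -Dnn -d 8086:159b | awk '{print $1}')
for pci in "${pci_addrs[@]}"; do
    # Each bound port exposes its kernel net device under sysfs, e.g. cvl_0_0.
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] || continue
        echo "Found net device under $pci: $(basename "$netdir")"
    done
done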
00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:23.645 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:23.645 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.645 18:37:33 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:23.645 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:23.645 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link 
set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:23.645 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:23.903 18:37:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:23.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:23.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:07:23.903 00:07:23.903 --- 10.0.0.2 ping statistics --- 00:07:23.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.903 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:23.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:23.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:07:23.903 00:07:23.903 --- 10.0.0.1 ping statistics --- 00:07:23.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.903 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.903 ************************************ 00:07:23.903 START TEST nvmf_filesystem_no_in_capsule 00:07:23.903 ************************************ 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:23.903 18:37:34 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1275278 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1275278 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 1275278 ']' 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:23.903 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.903 [2024-07-20 18:37:34.115877] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:23.903 [2024-07-20 18:37:34.115960] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.903 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.903 [2024-07-20 18:37:34.190001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.161 [2024-07-20 18:37:34.286751] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:24.161 [2024-07-20 18:37:34.286818] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:24.161 [2024-07-20 18:37:34.286836] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:24.161 [2024-07-20 18:37:34.286850] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:24.161 [2024-07-20 18:37:34.286862] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
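The network setup traced above (a new namespace cvl_0_0_ns_spdk, one port moved into it, 10.0.0.1/24 on the host side and 10.0.0.2/24 inside, a ping in each direction, then nvmf_tgt started under ip netns exec) can be reproduced on a machine without the physical E810 ports by substituting a veth pair. A hypothetical sketch under that assumption (device names, binary path and core mask are illustrative, not the CI host's values):

#!/usr/bin/env bash
set -euo pipefail
NS=nvmf_tgt_ns
ip netns add "$NS"
ip link add veth_host type veth peer name veth_tgt    # stand-ins for cvl_0_1 / cvl_0_0
ip link set veth_tgt netns "$NS"
ip addr add 10.0.0.1/24 dev veth_host
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_host up
ip netns exec "$NS" ip link set veth_tgt up
ip netns exec "$NS" ip link set lo up
ping -c 1 10.0.0.2                                     # initiator side -> target namespace
# Start the SPDK NVMe-oF target inside the namespace; the RPC socket is a Unix
# socket, so it stays reachable from outside the network namespace.
ip netns exec "$NS" ./build/bin/nvmf_tgt -m 0x3 &
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done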
00:07:24.161 [2024-07-20 18:37:34.286940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.161 [2024-07-20 18:37:34.286995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.161 [2024-07-20 18:37:34.287033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.161 [2024-07-20 18:37:34.287038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.161 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:24.161 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:24.161 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:24.161 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:24.161 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.161 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:24.161 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:24.161 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:24.161 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.161 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.161 [2024-07-20 18:37:34.437596] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.161 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.161 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:24.161 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.161 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.418 Malloc1 00:07:24.418 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.418 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:24.418 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.418 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.418 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.418 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:24.418 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.418 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:24.418 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.418 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:24.419 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.419 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.419 [2024-07-20 18:37:34.622115] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:24.419 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.419 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:24.419 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:24.419 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:24.419 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:24.419 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:24.419 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:24.419 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.419 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.419 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.419 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:24.419 { 00:07:24.419 "name": "Malloc1", 00:07:24.419 "aliases": [ 00:07:24.419 "df4fc78e-9713-4cfb-b138-e503e0a61e45" 00:07:24.419 ], 00:07:24.419 "product_name": "Malloc disk", 00:07:24.419 "block_size": 512, 00:07:24.419 "num_blocks": 1048576, 00:07:24.419 "uuid": "df4fc78e-9713-4cfb-b138-e503e0a61e45", 00:07:24.419 "assigned_rate_limits": { 00:07:24.419 "rw_ios_per_sec": 0, 00:07:24.419 "rw_mbytes_per_sec": 0, 00:07:24.419 "r_mbytes_per_sec": 0, 00:07:24.419 "w_mbytes_per_sec": 0 00:07:24.419 }, 00:07:24.419 "claimed": true, 00:07:24.419 "claim_type": "exclusive_write", 00:07:24.419 "zoned": false, 00:07:24.419 "supported_io_types": { 00:07:24.419 "read": true, 00:07:24.419 "write": true, 00:07:24.419 "unmap": true, 00:07:24.419 "write_zeroes": true, 00:07:24.419 "flush": true, 00:07:24.419 "reset": true, 00:07:24.419 "compare": false, 00:07:24.419 "compare_and_write": false, 00:07:24.419 "abort": true, 00:07:24.419 "nvme_admin": false, 00:07:24.419 "nvme_io": false 00:07:24.419 }, 00:07:24.419 "memory_domains": [ 00:07:24.419 { 00:07:24.419 "dma_device_id": "system", 00:07:24.419 "dma_device_type": 1 00:07:24.419 }, 00:07:24.419 { 00:07:24.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.419 "dma_device_type": 2 00:07:24.419 } 00:07:24.419 ], 00:07:24.419 "driver_specific": {} 00:07:24.419 } 00:07:24.419 ]' 00:07:24.419 
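The rpc_cmd calls traced in this stretch (transport creation, the 512 MiB Malloc bdev, the subsystem, its namespace and its TCP listener, then bdev_get_bdevs to size the volume) map directly onto scripts/rpc.py invocations. A sketch of the same sequence issued by hand, assuming the target is already up on the default RPC socket:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0       # the test's TCP options, in-capsule data size 0
$rpc bdev_malloc_create 512 512 -b Malloc1              # 512 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_get_bdevs -b Malloc1 | jq '.[] .block_size, .[] .num_blocks'

The sizes reported back (block_size 512, num_blocks 1048576) are what the test multiplies out and later compares against the 536870912-byte namespace it sees through nvme-cli.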
18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:24.419 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:24.419 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:24.419 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:24.419 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:24.419 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:24.419 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:24.419 18:37:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:25.358 18:37:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:25.358 18:37:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:25.358 18:37:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:25.358 18:37:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:25.358 18:37:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:27.252 18:37:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:27.252 18:37:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:27.252 18:37:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:27.252 18:37:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:27.252 18:37:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:27.252 18:37:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:27.252 18:37:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:27.252 18:37:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:27.252 18:37:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:27.252 18:37:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:27.252 18:37:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:27.252 18:37:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:27.252 18:37:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:27.252 18:37:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:27.252 18:37:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:27.252 18:37:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:27.252 18:37:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:27.816 18:37:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:28.382 18:37:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:29.315 18:37:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:29.315 18:37:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:29.315 18:37:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:29.315 18:37:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.315 18:37:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.315 ************************************ 00:07:29.315 START TEST filesystem_ext4 00:07:29.315 ************************************ 00:07:29.315 18:37:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:29.315 18:37:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:29.315 18:37:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:29.315 18:37:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:29.315 18:37:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:29.315 18:37:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:29.315 18:37:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:29.315 18:37:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:29.315 18:37:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:29.315 18:37:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:29.315 18:37:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:29.315 mke2fs 1.46.5 (30-Dec-2021) 00:07:29.315 Discarding device blocks: 0/522240 done 00:07:29.315 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:29.315 
Filesystem UUID: ab40c3d5-b2f2-4536-9fbc-eda524c78f4c 00:07:29.315 Superblock backups stored on blocks: 00:07:29.315 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:29.315 00:07:29.315 Allocating group tables: 0/64 done 00:07:29.315 Writing inode tables: 0/64 done 00:07:29.573 Creating journal (8192 blocks): done 00:07:29.831 Writing superblocks and filesystem accounting information: 0/64 done 00:07:29.831 00:07:29.831 18:37:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:29.831 18:37:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:29.831 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1275278 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:30.090 00:07:30.090 real 0m0.778s 00:07:30.090 user 0m0.016s 00:07:30.090 sys 0m0.036s 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:30.090 ************************************ 00:07:30.090 END TEST filesystem_ext4 00:07:30.090 ************************************ 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.090 ************************************ 00:07:30.090 START TEST filesystem_btrfs 00:07:30.090 ************************************ 00:07:30.090 18:37:40 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:30.090 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:30.091 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:30.657 btrfs-progs v6.6.2 00:07:30.657 See https://btrfs.readthedocs.io for more information. 00:07:30.657 00:07:30.657 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:30.657 NOTE: several default settings have changed in version 5.15, please make sure 00:07:30.657 this does not affect your deployments: 00:07:30.657 - DUP for metadata (-m dup) 00:07:30.657 - enabled no-holes (-O no-holes) 00:07:30.657 - enabled free-space-tree (-R free-space-tree) 00:07:30.657 00:07:30.657 Label: (null) 00:07:30.657 UUID: 151e4830-b815-49c2-b5b1-71ec726d910e 00:07:30.657 Node size: 16384 00:07:30.657 Sector size: 4096 00:07:30.657 Filesystem size: 510.00MiB 00:07:30.657 Block group profiles: 00:07:30.657 Data: single 8.00MiB 00:07:30.657 Metadata: DUP 32.00MiB 00:07:30.657 System: DUP 8.00MiB 00:07:30.657 SSD detected: yes 00:07:30.657 Zoned device: no 00:07:30.657 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:30.657 Runtime features: free-space-tree 00:07:30.657 Checksum: crc32c 00:07:30.657 Number of devices: 1 00:07:30.657 Devices: 00:07:30.657 ID SIZE PATH 00:07:30.657 1 510.00MiB /dev/nvme0n1p1 00:07:30.657 00:07:30.657 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:30.657 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:30.915 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:30.915 18:37:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:30.915 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:30.915 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:30.915 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:30.915 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:30.915 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1275278 00:07:30.915 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:30.915 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:30.915 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:30.915 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:30.915 00:07:30.915 real 0m0.782s 00:07:30.915 user 0m0.014s 00:07:30.915 sys 0m0.049s 00:07:30.915 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.915 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:30.915 ************************************ 00:07:30.915 END TEST filesystem_btrfs 00:07:30.915 ************************************ 00:07:30.915 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:30.915 18:37:41 
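The make_filesystem helper traced in each of these passes differs per filesystem only in the force flag it passes: ext4 uses mkfs.ext4 -F, while btrfs and xfs take -f. A minimal reconstruction of the logic visible in the trace (the helper's retry/wait handling between the mkfs call and the final return is not shown here and is omitted):

  make_filesystem() {
      local fstype=$1 dev_name=$2
      local i=0 force
      if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
      mkfs."$fstype" $force "$dev_name" && return 0
  }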
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:30.915 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.915 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.915 ************************************ 00:07:30.915 START TEST filesystem_xfs 00:07:30.915 ************************************ 00:07:30.915 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:30.915 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:30.915 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:30.915 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:30.915 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:30.915 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:30.915 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:30.916 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:07:30.916 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:30.916 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:30.916 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:30.916 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:30.916 = sectsz=512 attr=2, projid32bit=1 00:07:30.916 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:30.916 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:30.916 data = bsize=4096 blocks=130560, imaxpct=25 00:07:30.916 = sunit=0 swidth=0 blks 00:07:30.916 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:30.916 log =internal log bsize=4096 blocks=16384, version=2 00:07:30.916 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:30.916 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:31.848 Discarding blocks...Done. 
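The mkfs.xfs geometry above is consistent with the 510 MiB partition reported by mkfs.btrfs earlier: 130560 data blocks of 4096 bytes each is exactly 510 MiB, slightly smaller than the 512 MiB malloc bdev once GPT metadata and partition alignment are taken out.

  echo $((130560 * 4096))          # 534773760 bytes
  echo $((534773760 / 1048576))    # 510 MiB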
00:07:31.849 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:31.849 18:37:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:33.784 18:37:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:33.784 18:37:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:33.784 18:37:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:33.784 18:37:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:33.784 18:37:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:33.784 18:37:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:33.784 18:37:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1275278 00:07:33.784 18:37:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:33.784 18:37:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:33.784 18:37:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:33.784 18:37:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:33.784 00:07:33.784 real 0m2.834s 00:07:33.784 user 0m0.014s 00:07:33.784 sys 0m0.042s 00:07:33.784 18:37:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.784 18:37:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:33.784 ************************************ 00:07:33.784 END TEST filesystem_xfs 00:07:33.784 ************************************ 00:07:33.784 18:37:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:34.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:34.042 
18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1275278 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 1275278 ']' 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 1275278 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1275278 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1275278' 00:07:34.042 killing process with pid 1275278 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 1275278 00:07:34.042 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 1275278 00:07:34.605 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:34.605 00:07:34.605 real 0m10.630s 00:07:34.605 user 0m40.620s 00:07:34.605 sys 0m1.599s 00:07:34.605 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:34.605 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.605 ************************************ 00:07:34.605 END TEST nvmf_filesystem_no_in_capsule 00:07:34.605 ************************************ 00:07:34.605 18:37:44 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:34.605 18:37:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:34.605 18:37:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:34.605 18:37:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.605 
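Teardown for the no-in-capsule pass, as traced above, removes the test partition, detaches the host, deletes the subsystem over RPC, and kills the target process. Condensed (rpc_cmd is the test harness wrapper around SPDK's rpc.py):

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 1275278    # nvmf_tgt pid for this pass, then wait for it to exit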
************************************ 00:07:34.605 START TEST nvmf_filesystem_in_capsule 00:07:34.605 ************************************ 00:07:34.605 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:07:34.605 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:34.605 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:34.605 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:34.605 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:34.605 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.605 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1276815 00:07:34.605 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:34.605 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1276815 00:07:34.605 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 1276815 ']' 00:07:34.605 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.605 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:34.605 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.605 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:34.605 18:37:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.605 [2024-07-20 18:37:44.798649] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:34.605 [2024-07-20 18:37:44.798747] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.605 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.605 [2024-07-20 18:37:44.867988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.862 [2024-07-20 18:37:44.958091] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.862 [2024-07-20 18:37:44.958149] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.862 [2024-07-20 18:37:44.958175] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.862 [2024-07-20 18:37:44.958189] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.862 [2024-07-20 18:37:44.958201] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:34.862 [2024-07-20 18:37:44.958284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.862 [2024-07-20 18:37:44.958351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.862 [2024-07-20 18:37:44.958442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.862 [2024-07-20 18:37:44.958444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.862 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:34.862 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:34.862 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:34.862 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:34.862 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.862 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.862 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:34.862 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:34.862 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.862 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.862 [2024-07-20 18:37:45.103577] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.862 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.862 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:34.862 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.862 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.118 Malloc1 00:07:35.118 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.118 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:35.118 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.118 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.118 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.118 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.119 18:37:45 
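The in-capsule variant differs from the previous pass mainly in creating the TCP transport with 4096 bytes of in-capsule data (-c 4096). The listener registration and the bdev size check follow just below in the trace; pulled together, the target-side bring-up and the host-side attach look roughly like this (rpc_cmd wraps SPDK's rpc.py, 10.0.0.2:4420 is the listener address used throughout this run, and <hostnqn>/<hostid> stand in for the host NQN and ID the harness generates):

  # target side
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
  rpc_cmd bdev_malloc_create 512 512 -b Malloc1            # 512 MiB bdev, 512-byte blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # expected namespace size: block_size * num_blocks from bdev_get_bdevs
  rpc_cmd bdev_get_bdevs -b Malloc1 | jq '.[0].block_size * .[0].num_blocks'   # 536870912

  # host side
  nvme connect --hostnqn=<hostnqn> --hostid=<hostid> -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420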
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.119 [2024-07-20 18:37:45.288210] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:35.119 { 00:07:35.119 "name": "Malloc1", 00:07:35.119 "aliases": [ 00:07:35.119 "827eff38-1f54-465a-addc-22ca3d28a1a4" 00:07:35.119 ], 00:07:35.119 "product_name": "Malloc disk", 00:07:35.119 "block_size": 512, 00:07:35.119 "num_blocks": 1048576, 00:07:35.119 "uuid": "827eff38-1f54-465a-addc-22ca3d28a1a4", 00:07:35.119 "assigned_rate_limits": { 00:07:35.119 "rw_ios_per_sec": 0, 00:07:35.119 "rw_mbytes_per_sec": 0, 00:07:35.119 "r_mbytes_per_sec": 0, 00:07:35.119 "w_mbytes_per_sec": 0 00:07:35.119 }, 00:07:35.119 "claimed": true, 00:07:35.119 "claim_type": "exclusive_write", 00:07:35.119 "zoned": false, 00:07:35.119 "supported_io_types": { 00:07:35.119 "read": true, 00:07:35.119 "write": true, 00:07:35.119 "unmap": true, 00:07:35.119 "write_zeroes": true, 00:07:35.119 "flush": true, 00:07:35.119 "reset": true, 00:07:35.119 "compare": false, 00:07:35.119 "compare_and_write": false, 00:07:35.119 "abort": true, 00:07:35.119 "nvme_admin": false, 00:07:35.119 "nvme_io": false 00:07:35.119 }, 00:07:35.119 "memory_domains": [ 00:07:35.119 { 00:07:35.119 "dma_device_id": "system", 00:07:35.119 "dma_device_type": 1 00:07:35.119 }, 00:07:35.119 { 00:07:35.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.119 "dma_device_type": 2 00:07:35.119 } 00:07:35.119 ], 00:07:35.119 "driver_specific": {} 00:07:35.119 } 00:07:35.119 ]' 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] 
.block_size' 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:35.119 18:37:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:36.049 18:37:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:36.049 18:37:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:36.049 18:37:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:36.049 18:37:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:36.049 18:37:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:37.942 18:37:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:37.942 18:37:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:37.942 18:37:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:37.942 18:37:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:37.942 18:37:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:37.942 18:37:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:37.942 18:37:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:37.943 18:37:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:37.943 18:37:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:37.943 18:37:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:37.943 18:37:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:37.943 18:37:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:37.943 18:37:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:37.943 18:37:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:07:37.943 18:37:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:37.943 18:37:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:37.943 18:37:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:38.199 18:37:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:38.457 18:37:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:39.391 18:37:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:39.391 18:37:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:39.391 18:37:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:39.391 18:37:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.391 18:37:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.649 ************************************ 00:07:39.649 START TEST filesystem_in_capsule_ext4 00:07:39.649 ************************************ 00:07:39.649 18:37:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:39.649 18:37:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:39.649 18:37:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:39.649 18:37:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:39.649 18:37:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:39.649 18:37:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:39.649 18:37:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:39.649 18:37:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:39.649 18:37:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:39.649 18:37:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:39.649 18:37:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:39.649 mke2fs 1.46.5 (30-Dec-2021) 00:07:39.649 Discarding device blocks: 0/522240 done 00:07:39.649 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:39.649 Filesystem UUID: 25a58a43-ad02-43f2-9629-f0a468a2f9bb 00:07:39.649 Superblock backups stored on blocks: 00:07:39.649 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:39.649 00:07:39.649 Allocating group tables: 0/64 done 00:07:39.649 Writing inode tables: 0/64 done 00:07:40.215 Creating journal (8192 blocks): done 00:07:41.035 Writing superblocks and filesystem accounting information: 0/6428/64 done 00:07:41.035 00:07:41.035 18:37:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:41.035 18:37:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1276815 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:41.967 00:07:41.967 real 0m2.398s 00:07:41.967 user 0m0.013s 00:07:41.967 sys 0m0.039s 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:41.967 ************************************ 00:07:41.967 END TEST filesystem_in_capsule_ext4 00:07:41.967 ************************************ 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.967 ************************************ 00:07:41.967 START TEST filesystem_in_capsule_btrfs 00:07:41.967 ************************************ 00:07:41.967 18:37:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:41.967 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:42.224 btrfs-progs v6.6.2 00:07:42.224 See https://btrfs.readthedocs.io for more information. 00:07:42.224 00:07:42.224 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:42.224 NOTE: several default settings have changed in version 5.15, please make sure 00:07:42.224 this does not affect your deployments: 00:07:42.224 - DUP for metadata (-m dup) 00:07:42.224 - enabled no-holes (-O no-holes) 00:07:42.224 - enabled free-space-tree (-R free-space-tree) 00:07:42.224 00:07:42.224 Label: (null) 00:07:42.224 UUID: fd90eb30-2e03-451f-abed-aa171a9b1eaa 00:07:42.224 Node size: 16384 00:07:42.224 Sector size: 4096 00:07:42.224 Filesystem size: 510.00MiB 00:07:42.224 Block group profiles: 00:07:42.225 Data: single 8.00MiB 00:07:42.225 Metadata: DUP 32.00MiB 00:07:42.225 System: DUP 8.00MiB 00:07:42.225 SSD detected: yes 00:07:42.225 Zoned device: no 00:07:42.225 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:42.225 Runtime features: free-space-tree 00:07:42.225 Checksum: crc32c 00:07:42.225 Number of devices: 1 00:07:42.225 Devices: 00:07:42.225 ID SIZE PATH 00:07:42.225 1 510.00MiB /dev/nvme0n1p1 00:07:42.225 00:07:42.225 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:42.225 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:42.481 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:42.481 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:42.481 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:42.481 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:42.481 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:42.481 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:42.481 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1276815 00:07:42.481 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:42.481 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:42.481 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:42.481 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:42.481 00:07:42.481 real 0m0.612s 00:07:42.481 user 0m0.021s 00:07:42.481 sys 0m0.036s 00:07:42.481 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:42.481 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:42.481 ************************************ 00:07:42.481 END TEST filesystem_in_capsule_btrfs 00:07:42.482 ************************************ 00:07:42.482 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:42.482 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:42.482 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:42.482 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.739 ************************************ 00:07:42.739 START TEST filesystem_in_capsule_xfs 00:07:42.739 ************************************ 00:07:42.739 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:42.739 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:42.739 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:42.739 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:42.739 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:42.739 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:42.739 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:42.739 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:07:42.739 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:42.739 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:42.739 18:37:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:42.739 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:42.739 = sectsz=512 attr=2, projid32bit=1 00:07:42.739 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:42.739 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:42.739 data = bsize=4096 blocks=130560, imaxpct=25 00:07:42.739 = sunit=0 swidth=0 blks 00:07:42.739 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:42.739 log =internal log bsize=4096 blocks=16384, version=2 00:07:42.739 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:42.739 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:43.669 Discarding blocks...Done. 
00:07:43.669 18:37:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:43.669 18:37:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:45.597 18:37:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:45.597 18:37:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:45.597 18:37:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:45.597 18:37:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:45.597 18:37:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:45.597 18:37:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:45.597 18:37:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1276815 00:07:45.597 18:37:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:45.597 18:37:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:45.597 18:37:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:45.597 18:37:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:45.597 00:07:45.597 real 0m2.963s 00:07:45.597 user 0m0.010s 00:07:45.597 sys 0m0.048s 00:07:45.597 18:37:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:45.597 18:37:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:45.597 ************************************ 00:07:45.597 END TEST filesystem_in_capsule_xfs 00:07:45.597 ************************************ 00:07:45.597 18:37:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:45.855 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:45.855 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:45.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:45.855 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:45.855 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:45.855 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:45.855 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:45.855 18:37:56 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:45.855 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:46.113 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:46.113 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:46.113 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.113 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:46.113 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.113 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:46.113 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1276815 00:07:46.113 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 1276815 ']' 00:07:46.113 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 1276815 00:07:46.113 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:46.113 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:46.113 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1276815 00:07:46.113 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:46.113 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:46.113 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1276815' 00:07:46.113 killing process with pid 1276815 00:07:46.113 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 1276815 00:07:46.113 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 1276815 00:07:46.371 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:46.371 00:07:46.371 real 0m11.931s 00:07:46.371 user 0m45.731s 00:07:46.371 sys 0m1.663s 00:07:46.371 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.371 18:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:46.371 ************************************ 00:07:46.371 END TEST nvmf_filesystem_in_capsule 00:07:46.371 ************************************ 00:07:46.630 18:37:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:46.630 18:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:46.630 18:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:46.630 18:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:46.630 18:37:56 nvmf_tcp.nvmf_filesystem 
-- nvmf/common.sh@120 -- # set +e 00:07:46.630 18:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:46.630 18:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:46.630 rmmod nvme_tcp 00:07:46.630 rmmod nvme_fabrics 00:07:46.630 rmmod nvme_keyring 00:07:46.630 18:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:46.630 18:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:46.630 18:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:46.630 18:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:46.630 18:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:46.630 18:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:46.630 18:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:46.630 18:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:46.630 18:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:46.630 18:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.630 18:37:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.630 18:37:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.539 18:37:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:48.539 00:07:48.539 real 0m26.972s 00:07:48.539 user 1m27.212s 00:07:48.539 sys 0m4.806s 00:07:48.539 18:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:48.539 18:37:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.539 ************************************ 00:07:48.539 END TEST nvmf_filesystem 00:07:48.539 ************************************ 00:07:48.539 18:37:58 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:48.539 18:37:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:48.539 18:37:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:48.539 18:37:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:48.539 ************************************ 00:07:48.539 START TEST nvmf_target_discovery 00:07:48.539 ************************************ 00:07:48.539 18:37:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:48.797 * Looking for test storage... 
00:07:48.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.797 18:37:58 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:48.798 18:37:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.699 18:38:00 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:50.699 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:50.699 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:50.699 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:50.699 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:50.699 18:38:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:50.699 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:50.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:07:50.958 00:07:50.958 --- 10.0.0.2 ping statistics --- 00:07:50.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.958 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:50.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:50.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:07:50.958 00:07:50.958 --- 10.0.0.1 ping statistics --- 00:07:50.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.958 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1280303 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1280303 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 1280303 ']' 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:50.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:50.958 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:50.958 [2024-07-20 18:38:01.165699] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:50.958 [2024-07-20 18:38:01.165804] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.958 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.958 [2024-07-20 18:38:01.234759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.216 [2024-07-20 18:38:01.325101] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.216 [2024-07-20 18:38:01.325160] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.216 [2024-07-20 18:38:01.325173] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.216 [2024-07-20 18:38:01.325191] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.216 [2024-07-20 18:38:01.325202] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:51.216 [2024-07-20 18:38:01.325267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.216 [2024-07-20 18:38:01.325323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.216 [2024-07-20 18:38:01.325388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.216 [2024-07-20 18:38:01.325390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.216 [2024-07-20 18:38:01.487547] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:51.216 18:38:01 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.216 Null1 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.216 [2024-07-20 18:38:01.527918] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.216 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.473 Null2 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:51.473 18:38:01 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.473 Null3 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.473 Null4 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.473 18:38:01 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.473 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:07:51.473 00:07:51.473 Discovery Log Number of Records 6, Generation counter 6 00:07:51.473 =====Discovery Log Entry 0====== 00:07:51.473 trtype: tcp 00:07:51.473 adrfam: ipv4 00:07:51.473 subtype: current discovery subsystem 00:07:51.473 treq: not required 00:07:51.473 portid: 0 00:07:51.473 trsvcid: 4420 00:07:51.473 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:51.473 traddr: 10.0.0.2 00:07:51.473 eflags: explicit discovery connections, duplicate discovery information 00:07:51.473 sectype: none 00:07:51.473 =====Discovery Log Entry 1====== 00:07:51.473 trtype: tcp 00:07:51.473 adrfam: ipv4 00:07:51.473 subtype: nvme subsystem 00:07:51.473 treq: not required 00:07:51.473 portid: 0 00:07:51.473 trsvcid: 4420 00:07:51.473 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:51.473 traddr: 10.0.0.2 00:07:51.473 eflags: none 00:07:51.473 sectype: none 00:07:51.473 =====Discovery Log Entry 2====== 00:07:51.473 trtype: tcp 00:07:51.473 adrfam: ipv4 00:07:51.473 subtype: nvme subsystem 00:07:51.473 treq: not required 00:07:51.473 portid: 0 00:07:51.473 trsvcid: 4420 00:07:51.473 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:51.473 traddr: 10.0.0.2 00:07:51.473 eflags: none 00:07:51.473 sectype: none 00:07:51.473 =====Discovery Log Entry 3====== 00:07:51.473 trtype: tcp 00:07:51.473 adrfam: ipv4 00:07:51.473 subtype: nvme subsystem 00:07:51.473 treq: not required 00:07:51.473 portid: 0 00:07:51.473 trsvcid: 4420 00:07:51.473 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:51.473 traddr: 10.0.0.2 00:07:51.473 eflags: none 00:07:51.473 sectype: none 00:07:51.473 =====Discovery Log Entry 4====== 00:07:51.473 trtype: tcp 00:07:51.473 adrfam: ipv4 00:07:51.473 subtype: nvme subsystem 00:07:51.473 treq: not required 
00:07:51.473 portid: 0 00:07:51.473 trsvcid: 4420 00:07:51.473 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:51.473 traddr: 10.0.0.2 00:07:51.473 eflags: none 00:07:51.473 sectype: none 00:07:51.473 =====Discovery Log Entry 5====== 00:07:51.473 trtype: tcp 00:07:51.473 adrfam: ipv4 00:07:51.473 subtype: discovery subsystem referral 00:07:51.473 treq: not required 00:07:51.473 portid: 0 00:07:51.473 trsvcid: 4430 00:07:51.473 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:51.473 traddr: 10.0.0.2 00:07:51.473 eflags: none 00:07:51.474 sectype: none 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:51.474 Perform nvmf subsystem discovery via RPC 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.474 [ 00:07:51.474 { 00:07:51.474 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:51.474 "subtype": "Discovery", 00:07:51.474 "listen_addresses": [ 00:07:51.474 { 00:07:51.474 "trtype": "TCP", 00:07:51.474 "adrfam": "IPv4", 00:07:51.474 "traddr": "10.0.0.2", 00:07:51.474 "trsvcid": "4420" 00:07:51.474 } 00:07:51.474 ], 00:07:51.474 "allow_any_host": true, 00:07:51.474 "hosts": [] 00:07:51.474 }, 00:07:51.474 { 00:07:51.474 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:51.474 "subtype": "NVMe", 00:07:51.474 "listen_addresses": [ 00:07:51.474 { 00:07:51.474 "trtype": "TCP", 00:07:51.474 "adrfam": "IPv4", 00:07:51.474 "traddr": "10.0.0.2", 00:07:51.474 "trsvcid": "4420" 00:07:51.474 } 00:07:51.474 ], 00:07:51.474 "allow_any_host": true, 00:07:51.474 "hosts": [], 00:07:51.474 "serial_number": "SPDK00000000000001", 00:07:51.474 "model_number": "SPDK bdev Controller", 00:07:51.474 "max_namespaces": 32, 00:07:51.474 "min_cntlid": 1, 00:07:51.474 "max_cntlid": 65519, 00:07:51.474 "namespaces": [ 00:07:51.474 { 00:07:51.474 "nsid": 1, 00:07:51.474 "bdev_name": "Null1", 00:07:51.474 "name": "Null1", 00:07:51.474 "nguid": "D43FEDCBBBAC4EA2B1ADE3AC4AECEEC0", 00:07:51.474 "uuid": "d43fedcb-bbac-4ea2-b1ad-e3ac4aeceec0" 00:07:51.474 } 00:07:51.474 ] 00:07:51.474 }, 00:07:51.474 { 00:07:51.474 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:51.474 "subtype": "NVMe", 00:07:51.474 "listen_addresses": [ 00:07:51.474 { 00:07:51.474 "trtype": "TCP", 00:07:51.474 "adrfam": "IPv4", 00:07:51.474 "traddr": "10.0.0.2", 00:07:51.474 "trsvcid": "4420" 00:07:51.474 } 00:07:51.474 ], 00:07:51.474 "allow_any_host": true, 00:07:51.474 "hosts": [], 00:07:51.474 "serial_number": "SPDK00000000000002", 00:07:51.474 "model_number": "SPDK bdev Controller", 00:07:51.474 "max_namespaces": 32, 00:07:51.474 "min_cntlid": 1, 00:07:51.474 "max_cntlid": 65519, 00:07:51.474 "namespaces": [ 00:07:51.474 { 00:07:51.474 "nsid": 1, 00:07:51.474 "bdev_name": "Null2", 00:07:51.474 "name": "Null2", 00:07:51.474 "nguid": "A14B1D2F16AB450E8CE9BF2A89E1E0CE", 00:07:51.474 "uuid": "a14b1d2f-16ab-450e-8ce9-bf2a89e1e0ce" 00:07:51.474 } 00:07:51.474 ] 00:07:51.474 }, 00:07:51.474 { 00:07:51.474 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:51.474 "subtype": "NVMe", 00:07:51.474 "listen_addresses": [ 00:07:51.474 { 00:07:51.474 "trtype": "TCP", 00:07:51.474 "adrfam": "IPv4", 00:07:51.474 "traddr": "10.0.0.2", 00:07:51.474 "trsvcid": "4420" 00:07:51.474 } 00:07:51.474 ], 00:07:51.474 "allow_any_host": true, 
00:07:51.474 "hosts": [], 00:07:51.474 "serial_number": "SPDK00000000000003", 00:07:51.474 "model_number": "SPDK bdev Controller", 00:07:51.474 "max_namespaces": 32, 00:07:51.474 "min_cntlid": 1, 00:07:51.474 "max_cntlid": 65519, 00:07:51.474 "namespaces": [ 00:07:51.474 { 00:07:51.474 "nsid": 1, 00:07:51.474 "bdev_name": "Null3", 00:07:51.474 "name": "Null3", 00:07:51.474 "nguid": "F2D1537E244942B98DECF78C86A80DE6", 00:07:51.474 "uuid": "f2d1537e-2449-42b9-8dec-f78c86a80de6" 00:07:51.474 } 00:07:51.474 ] 00:07:51.474 }, 00:07:51.474 { 00:07:51.474 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:51.474 "subtype": "NVMe", 00:07:51.474 "listen_addresses": [ 00:07:51.474 { 00:07:51.474 "trtype": "TCP", 00:07:51.474 "adrfam": "IPv4", 00:07:51.474 "traddr": "10.0.0.2", 00:07:51.474 "trsvcid": "4420" 00:07:51.474 } 00:07:51.474 ], 00:07:51.474 "allow_any_host": true, 00:07:51.474 "hosts": [], 00:07:51.474 "serial_number": "SPDK00000000000004", 00:07:51.474 "model_number": "SPDK bdev Controller", 00:07:51.474 "max_namespaces": 32, 00:07:51.474 "min_cntlid": 1, 00:07:51.474 "max_cntlid": 65519, 00:07:51.474 "namespaces": [ 00:07:51.474 { 00:07:51.474 "nsid": 1, 00:07:51.474 "bdev_name": "Null4", 00:07:51.474 "name": "Null4", 00:07:51.474 "nguid": "E36EA805697D4AC1BFA1A8BC1F0125FE", 00:07:51.474 "uuid": "e36ea805-697d-4ac1-bfa1-a8bc1f0125fe" 00:07:51.474 } 00:07:51.474 ] 00:07:51.474 } 00:07:51.474 ] 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.474 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:51.731 rmmod nvme_tcp 00:07:51.731 rmmod nvme_fabrics 00:07:51.731 rmmod nvme_keyring 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1280303 ']' 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1280303 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 1280303 ']' 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 1280303 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1280303 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1280303' 00:07:51.731 killing process with pid 1280303 00:07:51.731 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 1280303 00:07:51.732 18:38:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 1280303 00:07:51.989 18:38:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:51.989 18:38:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:51.989 18:38:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:51.989 18:38:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:51.989 18:38:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:51.989 18:38:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.989 18:38:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.989 18:38:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.886 18:38:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:53.886 00:07:53.886 real 0m5.323s 00:07:53.886 user 0m3.999s 00:07:53.886 sys 0m1.839s 00:07:53.886 18:38:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:53.886 18:38:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.886 ************************************ 00:07:53.886 END TEST nvmf_target_discovery 00:07:53.886 ************************************ 00:07:53.886 18:38:04 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:53.886 18:38:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:53.886 18:38:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:53.886 18:38:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:54.143 ************************************ 00:07:54.143 START TEST nvmf_referrals 00:07:54.143 ************************************ 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:54.143 * Looking for test storage... 00:07:54.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:54.143 18:38:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:54.144 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:54.144 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.144 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:54.144 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:54.144 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:54.144 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.144 18:38:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.144 18:38:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.144 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:54.144 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:54.144 18:38:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:54.144 18:38:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.040 18:38:06 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:56.040 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:56.040 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.040 18:38:06 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:56.040 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:56.040 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:56.040 18:38:06 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:56.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:56.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:07:56.040 00:07:56.040 --- 10.0.0.2 ping statistics --- 00:07:56.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.040 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:56.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:56.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:07:56.040 00:07:56.040 --- 10.0.0.1 ping statistics --- 00:07:56.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.040 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1282281 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1282281 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 1282281 ']' 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:56.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:56.040 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.040 [2024-07-20 18:38:06.341773] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:56.041 [2024-07-20 18:38:06.341875] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.297 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.298 [2024-07-20 18:38:06.411687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:56.298 [2024-07-20 18:38:06.500120] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.298 [2024-07-20 18:38:06.500178] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.298 [2024-07-20 18:38:06.500192] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.298 [2024-07-20 18:38:06.500203] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.298 [2024-07-20 18:38:06.500213] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.298 [2024-07-20 18:38:06.500276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.298 [2024-07-20 18:38:06.500333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.298 [2024-07-20 18:38:06.500404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.298 [2024-07-20 18:38:06.500406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.298 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:56.298 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.555 [2024-07-20 18:38:06.651516] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.555 [2024-07-20 18:38:06.663760] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 
-s 8009 -o json 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:56.555 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:56.812 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:56.812 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:56.812 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:56.812 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.812 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.812 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.812 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:56.812 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.812 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.812 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.812 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:56.812 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.812 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.812 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.812 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:56.812 18:38:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:56.812 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.812 18:38:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.812 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.812 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:56.812 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:56.812 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:56.812 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:56.813 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:56.813 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:56.813 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:56.813 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:56.813 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:56.813 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:56.813 18:38:07 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.813 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:57.070 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:57.328 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- 
# [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:57.328 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:57.328 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:57.328 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:57.328 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:57.328 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:57.328 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:57.329 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:57.329 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.329 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.329 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.329 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:57.329 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:57.329 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:57.329 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:57.329 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.329 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:57.329 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.329 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.329 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:57.329 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:57.329 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:57.329 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:57.329 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:57.329 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:57.329 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:57.329 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:57.587 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:57.846 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:07:57.846 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:57.846 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:57.846 18:38:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:57.846 18:38:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:57.846 18:38:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:57.846 18:38:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:57.846 18:38:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:57.846 18:38:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:57.846 18:38:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:57.846 rmmod nvme_tcp 00:07:57.846 rmmod nvme_fabrics 00:07:57.846 rmmod nvme_keyring 00:07:57.846 18:38:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:57.846 18:38:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:57.846 18:38:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:57.846 18:38:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1282281 ']' 00:07:57.846 18:38:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1282281 00:07:57.846 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 1282281 ']' 00:07:57.846 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 1282281 00:07:57.846 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:07:57.846 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:57.846 18:38:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1282281 00:07:57.846 18:38:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:57.846 18:38:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:57.846 18:38:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1282281' 00:07:57.846 killing process with pid 1282281 00:07:57.846 18:38:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 1282281 00:07:57.846 18:38:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 1282281 00:07:58.105 18:38:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:58.105 18:38:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:58.105 18:38:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:58.105 18:38:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:58.105 18:38:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:58.105 18:38:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.105 18:38:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.105 18:38:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.005 18:38:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:00.005 00:08:00.005 real 0m6.032s 00:08:00.005 user 0m8.122s 00:08:00.005 sys 0m1.865s 00:08:00.005 18:38:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:08:00.005 18:38:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:00.005 ************************************ 00:08:00.005 END TEST nvmf_referrals 00:08:00.005 ************************************ 00:08:00.005 18:38:10 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:00.005 18:38:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:00.005 18:38:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:00.005 18:38:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:00.005 ************************************ 00:08:00.005 START TEST nvmf_connect_disconnect 00:08:00.005 ************************************ 00:08:00.005 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:00.264 * Looking for test storage... 00:08:00.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.264 18:38:10 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:00.264 18:38:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:02.177 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:02.177 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:02.177 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:02.177 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:02.177 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:02.178 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:02.178 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.178 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.178 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.178 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:02.178 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.178 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.178 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:02.178 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.178 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.178 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:02.178 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:02.178 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:02.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:08:02.436 00:08:02.436 --- 10.0.0.2 ping statistics --- 00:08:02.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.436 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:02.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:08:02.436 00:08:02.436 --- 10.0.0.1 ping statistics --- 00:08:02.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.436 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1284562 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1284562 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 1284562 ']' 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:02.436 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:02.436 [2024-07-20 18:38:12.707801] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:08:02.436 [2024-07-20 18:38:12.707895] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.436 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.694 [2024-07-20 18:38:12.776897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.694 [2024-07-20 18:38:12.869467] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.694 [2024-07-20 18:38:12.869530] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.694 [2024-07-20 18:38:12.869557] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.694 [2024-07-20 18:38:12.869571] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.694 [2024-07-20 18:38:12.869583] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:02.694 [2024-07-20 18:38:12.869676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.694 [2024-07-20 18:38:12.869752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.694 [2024-07-20 18:38:12.869850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.694 [2024-07-20 18:38:12.869854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.694 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:02.694 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:02.694 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:02.694 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:02.694 18:38:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:02.951 [2024-07-20 18:38:13.025672] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:02.951 18:38:13 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:02.951 [2024-07-20 18:38:13.082494] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:02.951 18:38:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:05.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:07.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:14.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:16.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.357 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:10:48.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.675 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.020 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:50.020 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:50.020 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:50.020 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:11:50.020 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:50.020 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:11:50.020 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:50.020 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:50.020 rmmod nvme_tcp 00:11:50.020 rmmod nvme_fabrics 00:11:50.020 rmmod nvme_keyring 00:11:50.020 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:50.020 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:11:50.020 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:11:50.020 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1284562 ']' 00:11:50.020 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1284562 00:11:50.020 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 
1284562 ']' 00:11:50.020 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 1284562 00:11:50.020 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:11:50.020 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:50.278 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1284562 00:11:50.278 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:50.278 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:50.278 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1284562' 00:11:50.278 killing process with pid 1284562 00:11:50.278 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 1284562 00:11:50.279 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 1284562 00:11:50.537 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:50.537 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:50.537 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:50.537 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:50.537 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:50.537 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.537 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.537 18:42:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.436 18:42:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:52.436 00:11:52.436 real 3m52.349s 00:11:52.436 user 14m43.183s 00:11:52.436 sys 0m31.454s 00:11:52.436 18:42:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:52.436 18:42:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:52.436 ************************************ 00:11:52.436 END TEST nvmf_connect_disconnect 00:11:52.436 ************************************ 00:11:52.436 18:42:02 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:52.436 18:42:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:52.436 18:42:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:52.436 18:42:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:52.436 ************************************ 00:11:52.436 START TEST nvmf_multitarget 00:11:52.436 ************************************ 00:11:52.436 18:42:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:52.436 * Looking for test storage... 
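For reference, the connect/disconnect run that just completed drove a fixed RPC sequence before looping nvme connect/disconnect 100 times: create the TCP transport, back it with a small malloc bdev (bdev_malloc_create 64 512), and expose it through a subsystem with a 10.0.0.2:4420 listener. A condensed sketch of that flow; the RPC method names, arguments, NQN and addresses are taken from the trace, while issuing them through scripts/rpc.py and the exact nvme connect flags beyond '-i 8' are illustrative assumptions (the test itself goes through its rpc_cmd wrapper):

    #!/usr/bin/env bash
    # Sketch of the flow exercised by connect_disconnect.sh above (not the script itself).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    "$RPC" nvmf_create_transport -t tcp -o -u 8192 -c 0
    "$RPC" bdev_malloc_create 64 512                 # bdev=Malloc0, as logged
    "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
    "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

    # num_iterations=100 and NVME_CONNECT='nvme connect -i 8', per the trace above.
    for _ in $(seq 1 100); do
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n "$NQN"
        nvme disconnect -n "$NQN"    # prints "NQN:... disconnected 1 controller(s)"
    done
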
00:11:52.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.436 18:42:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:52.436 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:52.436 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.436 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.436 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.436 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.436 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.436 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.436 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.436 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.436 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.436 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:11:52.695 18:42:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:54.591 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:54.592 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:54.592 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:54.592 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:54.592 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:54.592 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:54.849 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:54.849 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:54.849 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:54.849 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:54.849 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:54.849 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:54.849 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:54.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:54.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:11:54.849 00:11:54.849 --- 10.0.0.2 ping statistics --- 00:11:54.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.849 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:11:54.849 18:42:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:54.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:54.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:11:54.849 00:11:54.849 --- 10.0.0.1 ping statistics --- 00:11:54.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.849 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:11:54.849 18:42:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.849 18:42:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:11:54.849 18:42:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:54.849 18:42:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.849 18:42:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:54.849 18:42:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:54.849 18:42:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.849 18:42:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:54.849 18:42:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:54.849 18:42:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:54.849 18:42:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:54.849 18:42:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:54.849 18:42:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:54.849 18:42:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1315319 00:11:54.849 18:42:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:54.849 18:42:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1315319 00:11:54.849 18:42:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 1315319 ']' 00:11:54.849 18:42:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.849 18:42:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:54.849 18:42:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.849 18:42:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:54.849 18:42:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:54.849 [2024-07-20 18:42:05.075780] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
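The nvmf_tcp_init trace above (address flush, namespace creation, address assignment, firewall rule, and the two pings) is easier to follow as one consolidated sequence. A restatement using exactly the interface names and addresses from this run:

    #!/usr/bin/env bash
    # Consolidated view of the nvmf_tcp_init bring-up traced above.
    NS=cvl_0_0_ns_spdk
    TGT_IF=cvl_0_0    # target-side port, moved into the namespace (10.0.0.2)
    INI_IF=cvl_0_1    # initiator-side port, stays in the root namespace (10.0.0.1)

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

    # Connectivity check in both directions before any NVMe/TCP traffic.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1
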
00:11:54.849 [2024-07-20 18:42:05.075896] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.849 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.849 [2024-07-20 18:42:05.145610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.106 [2024-07-20 18:42:05.234338] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.106 [2024-07-20 18:42:05.234388] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:55.106 [2024-07-20 18:42:05.234412] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.106 [2024-07-20 18:42:05.234424] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.106 [2024-07-20 18:42:05.234434] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:55.106 [2024-07-20 18:42:05.234484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.106 [2024-07-20 18:42:05.234548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.106 [2024-07-20 18:42:05.234612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.107 [2024-07-20 18:42:05.234615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.107 18:42:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:55.107 18:42:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:11:55.107 18:42:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:55.107 18:42:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:55.107 18:42:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:55.107 18:42:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.107 18:42:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:55.107 18:42:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:55.107 18:42:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:55.363 18:42:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:55.363 18:42:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:55.363 "nvmf_tgt_1" 00:11:55.363 18:42:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:55.619 "nvmf_tgt_2" 00:11:55.619 18:42:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:55.619 18:42:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:55.619 18:42:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:55.619 
18:42:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:55.875 true 00:11:55.875 18:42:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:55.875 true 00:11:55.875 18:42:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:55.875 18:42:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:56.132 rmmod nvme_tcp 00:11:56.132 rmmod nvme_fabrics 00:11:56.132 rmmod nvme_keyring 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1315319 ']' 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1315319 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 1315319 ']' 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 1315319 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1315319 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1315319' 00:11:56.132 killing process with pid 1315319 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 1315319 00:11:56.132 18:42:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 1315319 00:11:56.390 18:42:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:56.390 18:42:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:56.390 18:42:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:56.390 18:42:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:56.390 18:42:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:56.390 18:42:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.390 18:42:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:56.390 18:42:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.287 18:42:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:58.287 00:11:58.287 real 0m5.866s 00:11:58.287 user 0m6.602s 00:11:58.287 sys 0m2.011s 00:11:58.287 18:42:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:58.287 18:42:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:58.287 ************************************ 00:11:58.287 END TEST nvmf_multitarget 00:11:58.287 ************************************ 00:11:58.287 18:42:08 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:58.287 18:42:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:58.287 18:42:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:58.287 18:42:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:58.547 ************************************ 00:11:58.547 START TEST nvmf_rpc 00:11:58.547 ************************************ 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:58.547 * Looking for test storage... 00:11:58.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:58.547 18:42:08 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.547 
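Stepping back to the multitarget run that finished above: its whole round-trip was driven through multitarget_rpc.py, creating two extra targets, checking the target count, and deleting them again. A condensed restatement of that sequence; the script path, target names and the '-s 32' argument are taken from the trace, while the bare test checks below stand in for the script's own string comparisons:

    #!/usr/bin/env bash
    # Sketch of the multitarget RPC round-trip exercised above.
    RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    [ "$("$RPC_PY" nvmf_get_targets | jq length)" -eq 1 ]   # only the default target

    "$RPC_PY" nvmf_create_target -n nvmf_tgt_1 -s 32
    "$RPC_PY" nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$("$RPC_PY" nvmf_get_targets | jq length)" -eq 3 ]   # default + two new targets

    "$RPC_PY" nvmf_delete_target -n nvmf_tgt_1
    "$RPC_PY" nvmf_delete_target -n nvmf_tgt_2
    [ "$("$RPC_PY" nvmf_get_targets | jq length)" -eq 1 ]   # back to the default only
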
18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:11:58.547 18:42:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:00.463 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:00.463 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:00.463 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.463 
18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:00.463 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:00.463 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:00.464 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.464 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:00.464 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:00.464 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:00.464 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:00.464 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:00.464 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:00.464 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:00.464 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:00.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:00.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:12:00.721 00:12:00.721 --- 10.0.0.2 ping statistics --- 00:12:00.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.721 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:00.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:00.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:12:00.721 00:12:00.721 --- 10.0.0.1 ping statistics --- 00:12:00.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.721 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1317927 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1317927 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 1317927 ']' 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:00.721 18:42:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.721 [2024-07-20 18:42:10.904529] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
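Annotation (not part of the captured trace): the nvmf/common.sh lines above are the harness's nvmf_tcp_init step. One of the two ice (E810) ports, cvl_0_0, is moved into a dedicated network namespace to play the target; the peer port cvl_0_1 stays in the default namespace as the initiator; addresses on 10.0.0.0/24 and an iptables rule for the NVMe/TCP port are applied, and the ping exchange above confirms both directions are reachable before nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), whose startup banner continues below. Restated as plain commands, exactly as they appear in the trace — interface names, the namespace name, and the addresses are whatever this particular test node picked, not fixed values:

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator-side address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # accept NVMe/TCP traffic on 4420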
00:12:00.721 [2024-07-20 18:42:10.904619] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.721 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.721 [2024-07-20 18:42:10.970283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.978 [2024-07-20 18:42:11.061951] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.978 [2024-07-20 18:42:11.062006] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.978 [2024-07-20 18:42:11.062033] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.978 [2024-07-20 18:42:11.062047] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.978 [2024-07-20 18:42:11.062058] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:00.978 [2024-07-20 18:42:11.062123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.978 [2024-07-20 18:42:11.062175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.978 [2024-07-20 18:42:11.062288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.978 [2024-07-20 18:42:11.062290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.978 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:00.978 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:00.978 18:42:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:00.978 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:00.978 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.978 18:42:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.978 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:00.978 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.978 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.978 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.978 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:00.978 "tick_rate": 2700000000, 00:12:00.978 "poll_groups": [ 00:12:00.978 { 00:12:00.978 "name": "nvmf_tgt_poll_group_000", 00:12:00.978 "admin_qpairs": 0, 00:12:00.978 "io_qpairs": 0, 00:12:00.978 "current_admin_qpairs": 0, 00:12:00.978 "current_io_qpairs": 0, 00:12:00.978 "pending_bdev_io": 0, 00:12:00.978 "completed_nvme_io": 0, 00:12:00.978 "transports": [] 00:12:00.978 }, 00:12:00.978 { 00:12:00.978 "name": "nvmf_tgt_poll_group_001", 00:12:00.978 "admin_qpairs": 0, 00:12:00.978 "io_qpairs": 0, 00:12:00.978 "current_admin_qpairs": 0, 00:12:00.978 "current_io_qpairs": 0, 00:12:00.978 "pending_bdev_io": 0, 00:12:00.978 "completed_nvme_io": 0, 00:12:00.978 "transports": [] 00:12:00.978 }, 00:12:00.978 { 00:12:00.978 "name": "nvmf_tgt_poll_group_002", 00:12:00.978 "admin_qpairs": 0, 00:12:00.978 "io_qpairs": 0, 00:12:00.978 "current_admin_qpairs": 0, 00:12:00.978 "current_io_qpairs": 0, 00:12:00.978 "pending_bdev_io": 0, 00:12:00.978 "completed_nvme_io": 0, 00:12:00.978 "transports": [] 
00:12:00.978 }, 00:12:00.978 { 00:12:00.978 "name": "nvmf_tgt_poll_group_003", 00:12:00.978 "admin_qpairs": 0, 00:12:00.978 "io_qpairs": 0, 00:12:00.978 "current_admin_qpairs": 0, 00:12:00.978 "current_io_qpairs": 0, 00:12:00.978 "pending_bdev_io": 0, 00:12:00.978 "completed_nvme_io": 0, 00:12:00.978 "transports": [] 00:12:00.978 } 00:12:00.978 ] 00:12:00.978 }' 00:12:00.978 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:00.978 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:00.978 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:00.978 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:00.978 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:00.978 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.235 [2024-07-20 18:42:11.305916] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:01.235 "tick_rate": 2700000000, 00:12:01.235 "poll_groups": [ 00:12:01.235 { 00:12:01.235 "name": "nvmf_tgt_poll_group_000", 00:12:01.235 "admin_qpairs": 0, 00:12:01.235 "io_qpairs": 0, 00:12:01.235 "current_admin_qpairs": 0, 00:12:01.235 "current_io_qpairs": 0, 00:12:01.235 "pending_bdev_io": 0, 00:12:01.235 "completed_nvme_io": 0, 00:12:01.235 "transports": [ 00:12:01.235 { 00:12:01.235 "trtype": "TCP" 00:12:01.235 } 00:12:01.235 ] 00:12:01.235 }, 00:12:01.235 { 00:12:01.235 "name": "nvmf_tgt_poll_group_001", 00:12:01.235 "admin_qpairs": 0, 00:12:01.235 "io_qpairs": 0, 00:12:01.235 "current_admin_qpairs": 0, 00:12:01.235 "current_io_qpairs": 0, 00:12:01.235 "pending_bdev_io": 0, 00:12:01.235 "completed_nvme_io": 0, 00:12:01.235 "transports": [ 00:12:01.235 { 00:12:01.235 "trtype": "TCP" 00:12:01.235 } 00:12:01.235 ] 00:12:01.235 }, 00:12:01.235 { 00:12:01.235 "name": "nvmf_tgt_poll_group_002", 00:12:01.235 "admin_qpairs": 0, 00:12:01.235 "io_qpairs": 0, 00:12:01.235 "current_admin_qpairs": 0, 00:12:01.235 "current_io_qpairs": 0, 00:12:01.235 "pending_bdev_io": 0, 00:12:01.235 "completed_nvme_io": 0, 00:12:01.235 "transports": [ 00:12:01.235 { 00:12:01.235 "trtype": "TCP" 00:12:01.235 } 00:12:01.235 ] 00:12:01.235 }, 00:12:01.235 { 00:12:01.235 "name": "nvmf_tgt_poll_group_003", 00:12:01.235 "admin_qpairs": 0, 00:12:01.235 "io_qpairs": 0, 00:12:01.235 "current_admin_qpairs": 0, 00:12:01.235 "current_io_qpairs": 0, 00:12:01.235 "pending_bdev_io": 0, 00:12:01.235 "completed_nvme_io": 0, 00:12:01.235 "transports": [ 00:12:01.235 { 00:12:01.235 "trtype": "TCP" 00:12:01.235 } 00:12:01.235 ] 00:12:01.235 } 00:12:01.235 ] 
00:12:01.235 }' 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.235 Malloc1 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.235 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.236 [2024-07-20 18:42:11.459215] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:01.236 [2024-07-20 18:42:11.481781] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:01.236 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:01.236 could not add new controller: failed to write to nvme-fabrics device 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.236 18:42:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:02.166 18:42:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:02.166 18:42:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:02.166 18:42:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:02.166 18:42:12 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:02.166 18:42:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:04.073 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:04.073 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:04.073 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:04.073 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:04.073 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:04.073 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:04.073 18:42:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:04.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.073 18:42:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:04.073 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x 
/usr/sbin/nvme ]] 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:04.074 [2024-07-20 18:42:14.242945] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:04.074 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:04.074 could not add new controller: failed to write to nvme-fabrics device 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.074 18:42:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:04.638 18:42:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:04.638 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:04.638 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:04.638 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:04.638 18:42:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:07.184 18:42:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:07.184 18:42:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:07.184 18:42:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:07.184 18:42:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:07.184 18:42:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:07.184 18:42:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:07.184 18:42:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.184 18:42:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:07.184 18:42:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:07.184 18:42:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:07.184 18:42:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.184 18:42:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:07.184 18:42:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 
-- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.184 [2024-07-20 18:42:17.030051] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.184 18:42:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:07.442 18:42:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:07.442 18:42:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:07.442 18:42:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.442 18:42:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:07.442 18:42:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:09.338 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:09.338 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:09.338 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.338 18:42:19 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:09.338 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.338 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:09.338 18:42:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.596 [2024-07-20 18:42:19.719453] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:09.596 
18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.596 18:42:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.160 18:42:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:10.160 18:42:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:10.160 18:42:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.160 18:42:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:10.160 18:42:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:12.052 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:12.052 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:12.052 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:12.052 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:12.052 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:12.052 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:12.052 18:42:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:12.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:12.309 18:42:22 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.309 [2024-07-20 18:42:22.450637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.309 18:42:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:12.871 18:42:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:12.871 18:42:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:12.871 18:42:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:12.871 18:42:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:12.871 18:42:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:14.772 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:14.772 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:14.772 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:14.772 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:14.772 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.772 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:14.772 18:42:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.066 [2024-07-20 18:42:25.140060] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.066 18:42:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:15.631 18:42:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:15.631 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:15.631 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 
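Annotation (not part of the captured trace): the block above is one pass of the target/rpc.sh@81 loop, which repeats five times. Each pass creates subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDKISFASTANDAWESOME, adds a TCP listener on 10.0.0.2:4420, attaches the Malloc1 bdev as namespace 5, allows any host, connects from the initiator, waits for the serial to appear in lsblk, then disconnects and tears the subsystem down again. A sketch of one iteration, assuming rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py and with the machine-specific host UUID replaced by a placeholder:

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:<host-uuid> --hostid=<host-uuid>
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME               # waitforserial check
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1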
00:12:15.631 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:15.631 18:42:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:17.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.529 [2024-07-20 18:42:27.833626] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.529 18:42:27 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.529 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.787 18:42:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.787 18:42:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:18.352 18:42:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:18.352 18:42:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:18.352 18:42:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:18.352 18:42:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:18.352 18:42:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:20.246 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:20.246 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:20.246 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:20.246 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:20.246 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.246 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:20.246 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:20.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.246 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:20.246 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:20.246 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:20.246 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 [2024-07-20 18:42:30.621410] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 [2024-07-20 18:42:30.669452] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 [2024-07-20 18:42:30.717620] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 [2024-07-20 18:42:30.765812] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 [2024-07-20 18:42:30.813983] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.506 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.763 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.763 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:20.763 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.763 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.763 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.763 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:20.763 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.763 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.763 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.763 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:20.763 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.763 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.763 18:42:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.763 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:20.763 "tick_rate": 2700000000, 00:12:20.763 "poll_groups": [ 00:12:20.763 { 00:12:20.763 "name": "nvmf_tgt_poll_group_000", 00:12:20.763 "admin_qpairs": 2, 00:12:20.763 
"io_qpairs": 84, 00:12:20.763 "current_admin_qpairs": 0, 00:12:20.763 "current_io_qpairs": 0, 00:12:20.763 "pending_bdev_io": 0, 00:12:20.763 "completed_nvme_io": 135, 00:12:20.763 "transports": [ 00:12:20.763 { 00:12:20.763 "trtype": "TCP" 00:12:20.763 } 00:12:20.763 ] 00:12:20.763 }, 00:12:20.763 { 00:12:20.763 "name": "nvmf_tgt_poll_group_001", 00:12:20.763 "admin_qpairs": 2, 00:12:20.763 "io_qpairs": 84, 00:12:20.763 "current_admin_qpairs": 0, 00:12:20.763 "current_io_qpairs": 0, 00:12:20.763 "pending_bdev_io": 0, 00:12:20.763 "completed_nvme_io": 184, 00:12:20.763 "transports": [ 00:12:20.763 { 00:12:20.763 "trtype": "TCP" 00:12:20.763 } 00:12:20.763 ] 00:12:20.763 }, 00:12:20.763 { 00:12:20.763 "name": "nvmf_tgt_poll_group_002", 00:12:20.763 "admin_qpairs": 1, 00:12:20.763 "io_qpairs": 84, 00:12:20.763 "current_admin_qpairs": 0, 00:12:20.763 "current_io_qpairs": 0, 00:12:20.764 "pending_bdev_io": 0, 00:12:20.764 "completed_nvme_io": 184, 00:12:20.764 "transports": [ 00:12:20.764 { 00:12:20.764 "trtype": "TCP" 00:12:20.764 } 00:12:20.764 ] 00:12:20.764 }, 00:12:20.764 { 00:12:20.764 "name": "nvmf_tgt_poll_group_003", 00:12:20.764 "admin_qpairs": 2, 00:12:20.764 "io_qpairs": 84, 00:12:20.764 "current_admin_qpairs": 0, 00:12:20.764 "current_io_qpairs": 0, 00:12:20.764 "pending_bdev_io": 0, 00:12:20.764 "completed_nvme_io": 183, 00:12:20.764 "transports": [ 00:12:20.764 { 00:12:20.764 "trtype": "TCP" 00:12:20.764 } 00:12:20.764 ] 00:12:20.764 } 00:12:20.764 ] 00:12:20.764 }' 00:12:20.764 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:20.764 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:20.764 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:20.764 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:20.764 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:20.764 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:20.764 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:20.764 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:20.764 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:20.764 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:20.764 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:20.764 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:20.764 18:42:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:20.764 18:42:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:20.764 18:42:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:20.764 18:42:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:20.764 18:42:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:20.764 18:42:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:20.764 18:42:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:20.764 rmmod nvme_tcp 00:12:20.764 rmmod nvme_fabrics 00:12:20.764 rmmod nvme_keyring 00:12:20.764 18:42:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:20.764 18:42:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:20.764 18:42:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:20.764 18:42:31 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1317927 ']' 00:12:20.764 18:42:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1317927 00:12:20.764 18:42:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 1317927 ']' 00:12:20.764 18:42:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 1317927 00:12:20.764 18:42:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:12:20.764 18:42:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:20.764 18:42:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1317927 00:12:20.764 18:42:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:20.764 18:42:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:20.764 18:42:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1317927' 00:12:20.764 killing process with pid 1317927 00:12:20.764 18:42:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 1317927 00:12:20.764 18:42:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 1317927 00:12:21.020 18:42:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:21.020 18:42:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:21.020 18:42:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:21.020 18:42:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:21.020 18:42:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:21.020 18:42:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.020 18:42:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.020 18:42:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.543 18:42:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:23.543 00:12:23.543 real 0m24.707s 00:12:23.543 user 1m19.842s 00:12:23.543 sys 0m3.923s 00:12:23.543 18:42:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:23.543 18:42:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.543 ************************************ 00:12:23.543 END TEST nvmf_rpc 00:12:23.543 ************************************ 00:12:23.543 18:42:33 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:23.543 18:42:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:23.543 18:42:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:23.543 18:42:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:23.543 ************************************ 00:12:23.543 START TEST nvmf_invalid 00:12:23.543 ************************************ 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:23.543 * Looking for test storage... 
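The nvmf_rpc test that finishes above validates the target's per-poll-group counters by summing fields of the nvmf_get_stats JSON with the jsum helper visible in the trace (a jq filter piped into awk). A minimal stand-alone sketch of that aggregation, assuming a running SPDK target and the stock scripts/rpc.py (the test itself goes through its rpc_cmd wrapper rather than calling rpc.py directly):

    # Sum io_qpairs across all poll groups, as jsum '.poll_groups[].io_qpairs' does above
    ./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'
    # The test then asserts the total is positive, e.g. (( 336 > 0 )) in this run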
00:12:23.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.543 18:42:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:23.544 18:42:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:25.445 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:25.445 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:25.445 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:25.445 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:25.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:25.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:12:25.445 00:12:25.445 --- 10.0.0.2 ping statistics --- 00:12:25.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.445 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:25.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:25.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:12:25.445 00:12:25.445 --- 10.0.0.1 ping statistics --- 00:12:25.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.445 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1322356 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1322356 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 1322356 ']' 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:25.445 18:42:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:25.445 [2024-07-20 18:42:35.693728] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
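For the nvmf_invalid test, the trace above shows the target being started inside the cvl_0_0_ns_spdk network namespace with all four cores enabled (-m 0xF). A condensed sketch of that launch, with the workspace path abbreviated and the readiness wait written out explicitly; the waitforlisten helper's exact polling mechanism is not shown in this log, so the spdk_get_version loop below is an illustrative assumption, not what the script literally runs:

    # Launch nvmf_tgt in the target namespace, mirroring the command in the trace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Block until the RPC socket answers before issuing nvmf_create_subsystem calls
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do sleep 0.5; done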
00:12:25.445 [2024-07-20 18:42:35.693837] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.445 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.445 [2024-07-20 18:42:35.763015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.704 [2024-07-20 18:42:35.853813] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.704 [2024-07-20 18:42:35.853877] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.704 [2024-07-20 18:42:35.853904] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.704 [2024-07-20 18:42:35.853919] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.704 [2024-07-20 18:42:35.853931] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.704 [2024-07-20 18:42:35.854024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.704 [2024-07-20 18:42:35.854079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.704 [2024-07-20 18:42:35.854136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.704 [2024-07-20 18:42:35.854138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.704 18:42:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:25.704 18:42:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:12:25.704 18:42:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:25.704 18:42:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:25.704 18:42:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:25.704 18:42:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.704 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:25.704 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20900 00:12:25.962 [2024-07-20 18:42:36.280396] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:26.220 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:26.220 { 00:12:26.220 "nqn": "nqn.2016-06.io.spdk:cnode20900", 00:12:26.221 "tgt_name": "foobar", 00:12:26.221 "method": "nvmf_create_subsystem", 00:12:26.221 "req_id": 1 00:12:26.221 } 00:12:26.221 Got JSON-RPC error response 00:12:26.221 response: 00:12:26.221 { 00:12:26.221 "code": -32603, 00:12:26.221 "message": "Unable to find target foobar" 00:12:26.221 }' 00:12:26.221 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:26.221 { 00:12:26.221 "nqn": "nqn.2016-06.io.spdk:cnode20900", 00:12:26.221 "tgt_name": "foobar", 00:12:26.221 "method": "nvmf_create_subsystem", 00:12:26.221 "req_id": 1 00:12:26.221 } 00:12:26.221 Got JSON-RPC error response 00:12:26.221 response: 00:12:26.221 { 00:12:26.221 "code": -32603, 00:12:26.221 "message": "Unable to find target foobar" 00:12:26.221 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:26.221 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:26.221 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode384 00:12:26.479 [2024-07-20 18:42:36.557371] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode384: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:26.479 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:26.479 { 00:12:26.479 "nqn": "nqn.2016-06.io.spdk:cnode384", 00:12:26.479 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:26.479 "method": "nvmf_create_subsystem", 00:12:26.479 "req_id": 1 00:12:26.479 } 00:12:26.479 Got JSON-RPC error response 00:12:26.479 response: 00:12:26.479 { 00:12:26.479 "code": -32602, 00:12:26.479 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:26.479 }' 00:12:26.479 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:26.479 { 00:12:26.479 "nqn": "nqn.2016-06.io.spdk:cnode384", 00:12:26.479 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:26.479 "method": "nvmf_create_subsystem", 00:12:26.479 "req_id": 1 00:12:26.479 } 00:12:26.479 Got JSON-RPC error response 00:12:26.479 response: 00:12:26.479 { 00:12:26.479 "code": -32602, 00:12:26.479 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:26.479 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:26.479 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:26.479 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2043 00:12:26.738 [2024-07-20 18:42:36.830266] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2043: invalid model number 'SPDK_Controller' 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:26.738 { 00:12:26.738 "nqn": "nqn.2016-06.io.spdk:cnode2043", 00:12:26.738 "model_number": "SPDK_Controller\u001f", 00:12:26.738 "method": "nvmf_create_subsystem", 00:12:26.738 "req_id": 1 00:12:26.738 } 00:12:26.738 Got JSON-RPC error response 00:12:26.738 response: 00:12:26.738 { 00:12:26.738 "code": -32602, 00:12:26.738 "message": "Invalid MN SPDK_Controller\u001f" 00:12:26.738 }' 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:26.738 { 00:12:26.738 "nqn": "nqn.2016-06.io.spdk:cnode2043", 00:12:26.738 "model_number": "SPDK_Controller\u001f", 00:12:26.738 "method": "nvmf_create_subsystem", 00:12:26.738 "req_id": 1 00:12:26.738 } 00:12:26.738 Got JSON-RPC error response 00:12:26.738 response: 00:12:26.738 { 00:12:26.738 "code": -32602, 00:12:26.738 "message": "Invalid MN SPDK_Controller\u001f" 00:12:26.738 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' 
'92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:26.738 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 61 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ W == \- ]] 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'W+/"M%&@Ou3/#V&=0\2?:' 00:12:26.739 18:42:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'W+/"M%&@Ou3/#V&=0\2?:' nqn.2016-06.io.spdk:cnode12190 00:12:26.998 [2024-07-20 18:42:37.127287] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12190: invalid serial number 'W+/"M%&@Ou3/#V&=0\2?:' 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:26.998 { 00:12:26.998 "nqn": "nqn.2016-06.io.spdk:cnode12190", 00:12:26.998 "serial_number": "W+/\"M%&@Ou3/#V&=0\\2?:", 00:12:26.998 "method": "nvmf_create_subsystem", 00:12:26.998 "req_id": 1 00:12:26.998 } 00:12:26.998 Got JSON-RPC error response 00:12:26.998 response: 00:12:26.998 { 00:12:26.998 "code": -32602, 00:12:26.998 "message": "Invalid SN W+/\"M%&@Ou3/#V&=0\\2?:" 00:12:26.998 }' 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:26.998 { 00:12:26.998 "nqn": "nqn.2016-06.io.spdk:cnode12190", 00:12:26.998 "serial_number": "W+/\"M%&@Ou3/#V&=0\\2?:", 00:12:26.998 "method": "nvmf_create_subsystem", 00:12:26.998 "req_id": 1 00:12:26.998 } 00:12:26.998 Got JSON-RPC error response 00:12:26.998 response: 00:12:26.998 { 00:12:26.998 "code": -32602, 00:12:26.998 "message": "Invalid SN W+/\"M%&@Ou3/#V&=0\\2?:" 00:12:26.998 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 
-- # string+=. 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll++ )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:26.998 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll 
< length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ . == \- ]] 00:12:26.999 18:42:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '..Z{7qo>1T9Ec{R3!1T9Ec{R3!1T9Ec{R3!1T9Ec{R3!1T9Ec{R3! /dev/null' 00:12:29.860 18:42:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.757 18:42:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:31.757 00:12:31.757 real 0m8.679s 00:12:31.757 user 0m20.247s 00:12:31.757 sys 0m2.437s 00:12:31.757 18:42:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:31.757 18:42:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:31.757 ************************************ 00:12:31.757 END TEST nvmf_invalid 00:12:31.757 ************************************ 00:12:31.757 18:42:42 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:31.757 18:42:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:31.757 18:42:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:31.757 18:42:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:32.028 ************************************ 00:12:32.028 START TEST nvmf_abort 00:12:32.028 ************************************ 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:32.028 * Looking for test storage... 00:12:32.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.028 18:42:42 
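Note on the per-character trace above (everything through "END TEST nvmf_invalid"): target/invalid.sh builds a random name one byte at a time. invalid.sh@25 picks a code point, prints it in hex with printf %x, converts it to a character with echo -e, and appends it to $string until $length characters have been collected; the [[ . == \- ]] check at @28 only guards against a leading dash before the finished string is echoed. A minimal sketch of the same idea, assuming the caller supplies the length; the helper name gen_random_string and the RANDOM-based character selection are illustrative, not the script's own code:

  gen_random_string() {
      local length=$1 string='' ll
      for (( ll = 0; ll < length; ll++ )); do
          # pick a printable ASCII code point (0x21-0x7e) and append the character
          string+=$(echo -e "\x$(printf %x $((RANDOM % 94 + 33)))")
      done
      echo "$string"
  }

Strings produced this way are then used by the test to exercise the target's handling of invalid names and parameters, which is why the resulting garbage names show up in the echo above.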
nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:32.028 
18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:12:32.028 18:42:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:33.925 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:33.925 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:12:33.925 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:33.925 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:33.925 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:34.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:34.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:12:34.183 00:12:34.183 --- 10.0.0.2 ping statistics --- 00:12:34.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.183 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:34.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:34.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:12:34.183 00:12:34.183 --- 10.0.0.1 ping statistics --- 00:12:34.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.183 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1324923 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1324923 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 1324923 ']' 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:34.183 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:34.183 [2024-07-20 18:42:44.388056] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
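Before the target application is started, nvmf_tcp_init in nvmf/common.sh wires the two e810 ports into a point-to-point test topology: cvl_0_0 is moved into a fresh network namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP port 4420 is opened in the firewall, and both directions are verified with a single ping, which is the output shown above. Condensed from the trace (the interface and namespace names are simply what this rig uses):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every nvmf target test in this run repeats the same preparation, which is why near-identical ping statistics reappear later in the log.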
00:12:34.184 [2024-07-20 18:42:44.388150] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.184 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.184 [2024-07-20 18:42:44.457613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:34.442 [2024-07-20 18:42:44.552858] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.442 [2024-07-20 18:42:44.552921] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.442 [2024-07-20 18:42:44.552950] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.442 [2024-07-20 18:42:44.552964] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.442 [2024-07-20 18:42:44.552975] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:34.442 [2024-07-20 18:42:44.553060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.442 [2024-07-20 18:42:44.553118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.442 [2024-07-20 18:42:44.553121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:34.442 [2024-07-20 18:42:44.696710] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:34.442 Malloc0 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:34.442 Delay0 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:34.442 18:42:44 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.442 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:34.699 [2024-07-20 18:42:44.767967] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.699 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.699 18:42:44 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:34.699 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.699 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:34.699 18:42:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.699 18:42:44 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:34.699 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.699 [2024-07-20 18:42:44.875508] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:37.226 Initializing NVMe Controllers 00:12:37.226 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:37.226 controller IO queue size 128 less than required 00:12:37.226 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:37.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:37.226 Initialization complete. Launching workers. 
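The RPC sequence traced just above is target/abort.sh provisioning everything the abort example needs: a TCP transport, a 64 MiB malloc bdev with 4096-byte blocks wrapped in a delay bdev (so that I/O stays in flight long enough to be aborted), and a subsystem exposing it on 10.0.0.2:4420. The equivalent calls, as issued through rpc_cmd (the harness wrapper around scripts/rpc.py); the -r/-t/-w/-n values are the injected average and tail read/write latencies, which I read as microseconds, i.e. roughly one second each:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc.py bdev_malloc_create 64 4096 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

build/examples/abort then connects with queue depth 128 (-q 128) on a single core (-c 0x1) for one second (-t 1) and aborts outstanding commands; the counters below (33355 aborts submitted, 33298 successful, 62 failed to submit) are its summary output.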
00:12:37.226 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33294 00:12:37.226 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33355, failed to submit 62 00:12:37.226 success 33298, unsuccess 57, failed 0 00:12:37.226 18:42:46 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:37.226 18:42:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.226 18:42:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:37.226 18:42:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.226 18:42:46 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:37.226 18:42:46 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:12:37.226 18:42:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:37.226 18:42:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:12:37.226 18:42:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:37.226 18:42:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:12:37.226 18:42:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:37.226 18:42:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:37.226 rmmod nvme_tcp 00:12:37.226 rmmod nvme_fabrics 00:12:37.226 rmmod nvme_keyring 00:12:37.226 18:42:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:37.226 18:42:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:12:37.226 18:42:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:12:37.226 18:42:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1324923 ']' 00:12:37.226 18:42:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1324923 00:12:37.226 18:42:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 1324923 ']' 00:12:37.226 18:42:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 1324923 00:12:37.226 18:42:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:12:37.226 18:42:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:37.226 18:42:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1324923 00:12:37.226 18:42:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:37.226 18:42:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:37.226 18:42:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1324923' 00:12:37.226 killing process with pid 1324923 00:12:37.226 18:42:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 1324923 00:12:37.226 18:42:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 1324923 00:12:37.226 18:42:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:37.226 18:42:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:37.226 18:42:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:37.226 18:42:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:37.226 18:42:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:37.226 18:42:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.226 18:42:47 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.226 18:42:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.130 18:42:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:39.130 00:12:39.130 real 0m7.246s 00:12:39.130 user 0m10.419s 00:12:39.130 sys 0m2.515s 00:12:39.130 18:42:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:39.130 18:42:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:39.130 ************************************ 00:12:39.130 END TEST nvmf_abort 00:12:39.130 ************************************ 00:12:39.130 18:42:49 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:39.130 18:42:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:39.130 18:42:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:39.130 18:42:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:39.130 ************************************ 00:12:39.130 START TEST nvmf_ns_hotplug_stress 00:12:39.130 ************************************ 00:12:39.130 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:39.130 * Looking for test storage... 00:12:39.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:39.130 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.130 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:12:39.130 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.130 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.131 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.131 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.131 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.131 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.131 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.131 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.131 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.131 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:39.390 18:42:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.390 18:42:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:12:39.390 18:42:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:41.290 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:41.290 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:41.290 18:42:51 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:41.290 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.290 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:41.291 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
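As in the abort test above, gather_supported_nvmf_pci_devs walks the known Intel/Mellanox device IDs (the e810 entries 0x1592/0x159b, x722 0x37d2, and the mlx5 family), keeps the functions actually present on the host, and resolves each PCI address to its kernel net device through sysfs; that is where the cvl_0_0/cvl_0_1 names come from before they are split into target and initiator interfaces. The lookup is essentially the glob seen in the trace, for example:

  pci=0000:0a:00.0
  ls /sys/bus/pci/devices/$pci/net/    # prints the netdev name behind this port, here cvl_0_0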
00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:41.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:41.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:12:41.291 00:12:41.291 --- 10.0.0.2 ping statistics --- 00:12:41.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.291 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:41.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:41.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:12:41.291 00:12:41.291 --- 10.0.0.1 ping statistics --- 00:12:41.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.291 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1327260 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1327260 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 1327260 ']' 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:41.291 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.550 [2024-07-20 18:42:51.623258] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:12:41.550 [2024-07-20 18:42:51.623356] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.550 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.550 [2024-07-20 18:42:51.693171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:41.550 [2024-07-20 18:42:51.786178] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:41.550 [2024-07-20 18:42:51.786239] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:41.550 [2024-07-20 18:42:51.786265] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.550 [2024-07-20 18:42:51.786280] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.550 [2024-07-20 18:42:51.786292] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:41.550 [2024-07-20 18:42:51.786381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:41.550 [2024-07-20 18:42:51.786435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:41.550 [2024-07-20 18:42:51.786438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.807 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:41.807 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:12:41.807 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:41.807 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:41.807 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.807 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.807 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:41.807 18:42:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:42.065 [2024-07-20 18:42:52.162330] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:42.065 18:42:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:42.322 18:42:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.579 [2024-07-20 18:42:52.721413] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.580 18:42:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:42.836 18:42:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:43.093 Malloc0 00:12:43.093 18:42:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:43.349 Delay0 00:12:43.349 18:42:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.607 18:42:53 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:43.874 NULL1 00:12:43.874 18:42:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:44.131 18:42:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1327561 00:12:44.131 18:42:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:44.131 18:42:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:12:44.131 18:42:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.131 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.503 Read completed with error (sct=0, sc=11) 00:12:45.503 18:42:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:45.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:45.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:45.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:45.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:45.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:45.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:45.503 18:42:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:45.503 18:42:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:45.760 true 00:12:45.760 18:42:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:12:45.761 18:42:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.693 18:42:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.949 18:42:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:46.949 18:42:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:47.206 true 00:12:47.206 18:42:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:12:47.206 18:42:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.463 18:42:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:47.463 18:42:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:47.463 18:42:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:47.719 true 00:12:47.719 18:42:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:12:47.719 18:42:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.976 18:42:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.233 18:42:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:48.233 18:42:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:48.489 true 00:12:48.489 18:42:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:12:48.489 18:42:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.861 18:42:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.861 18:43:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:49.861 18:43:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:50.118 true 00:12:50.376 18:43:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:12:50.376 18:43:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.376 18:43:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.633 18:43:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:50.633 18:43:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:50.891 true 00:12:50.891 18:43:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:12:50.891 18:43:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.826 18:43:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:51.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:52.083 18:43:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:52.083 18:43:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:52.341 true 00:12:52.341 18:43:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:12:52.341 18:43:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.598 18:43:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.855 18:43:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:52.855 18:43:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:53.113 true 00:12:53.113 18:43:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:12:53.113 18:43:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:54.042 18:43:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:54.298 18:43:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:54.298 18:43:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:54.555 true 00:12:54.555 18:43:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:12:54.555 18:43:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.813 18:43:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.071 18:43:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:55.071 18:43:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:55.330 true 00:12:55.330 18:43:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:12:55.330 18:43:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.262 18:43:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.519 18:43:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:56.519 18:43:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:56.807 true 00:12:56.807 18:43:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:12:56.807 18:43:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.064 18:43:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.321 18:43:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:57.321 18:43:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:57.321 true 00:12:57.578 18:43:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:12:57.578 18:43:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.579 18:43:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.883 18:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:57.883 18:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:58.140 true 00:12:58.140 18:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:12:58.140 18:43:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.515 18:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.515 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:12:59.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.515 18:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:59.515 18:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:59.774 true 00:12:59.774 18:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:12:59.774 18:43:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.707 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:00.707 18:43:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.707 18:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:00.707 18:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:00.967 true 00:13:00.967 18:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:13:00.967 18:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.223 18:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.480 18:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:01.480 18:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:01.739 true 00:13:01.739 18:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:13:01.739 18:43:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.673 18:43:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.673 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.673 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.673 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.929 18:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:02.929 18:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:03.185 true 00:13:03.185 18:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:13:03.185 18:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.441 18:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.441 18:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:03.441 18:43:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:03.699 true 00:13:03.957 18:43:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:13:03.957 18:43:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.886 18:43:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.143 18:43:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:05.143 18:43:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:05.401 true 00:13:05.401 18:43:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:13:05.401 18:43:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.658 18:43:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.916 18:43:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:05.916 18:43:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:06.172 true 00:13:06.173 18:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:13:06.173 18:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.430 18:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.430 18:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:06.430 18:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:06.686 true 00:13:06.686 18:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:13:06.686 18:43:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.061 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:08.061 18:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.061 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:08.061 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:08.061 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:08.061 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:08.061 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:08.061 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:08.061 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:08.061 18:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:08.061 18:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:08.319 true 00:13:08.319 18:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:13:08.319 18:43:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.286 18:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.286 18:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:09.286 18:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:09.544 true 00:13:09.544 18:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:13:09.544 18:43:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.802 18:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.061 18:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:10.061 18:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:10.320 true 00:13:10.320 18:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:13:10.320 18:43:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.256 18:43:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
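Every block above follows the same cadence: while the spdk_nvme_perf initiator launched at script line 40 (PID 1327561, a 30-second randread at queue depth 128 against 10.0.0.2:4420) is still running, namespace 1 is hot-removed, the Delay0 bdev is re-attached, and the NULL1 bdev is grown by one step (1001, 1002, ...). The suppressed 'Read completed with error (sct=0, sc=11)' messages are the initiator seeing its namespace vanish mid-I/O. Reconstructed from the traced script lines 40-53 (rpc_py and the perf binary path are shortened relative to the SPDK tree):

# sketch: the hotplug/resize loop driven by ns_hotplug_stress.sh, as seen in the trace
rpc_py=./scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# @40/@42: run the initiator workload in the background and remember its PID
./build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

null_size=1000                                   # @25: starting size of NULL1
while kill -0 "$PERF_PID"; do                    # @44: loop while the perf job is alive
    $rpc_py nvmf_subsystem_remove_ns "$nqn" 1    # @45: hot-remove namespace 1
    $rpc_py nvmf_subsystem_add_ns "$nqn" Delay0  # @46: re-attach the Delay0 bdev as ns 1
    null_size=$((null_size + 1))                 # @49
    $rpc_py bdev_null_resize NULL1 "$null_size"  # @50: grow NULL1 (1001, 1002, ...)
done
wait "$PERF_PID"                                 # @53: reap the initiator once it exits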
00:13:11.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.256 18:43:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:11.256 18:43:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:11.514 true 00:13:11.514 18:43:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:13:11.514 18:43:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.079 18:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:12.079 18:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:12.079 18:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:12.336 true 00:13:12.336 18:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:13:12.336 18:43:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.267 18:43:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.524 18:43:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:13.524 18:43:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:13.781 true 00:13:13.781 18:43:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:13:13.781 18:43:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.038 18:43:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:14.295 18:43:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:14.295 18:43:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:14.552 true 00:13:14.552 18:43:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:13:14.552 18:43:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.484 18:43:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.484 Initializing NVMe Controllers 00:13:15.484 Attached to NVMe over 
Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:15.484 Controller IO queue size 128, less than required. 00:13:15.484 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:15.484 Controller IO queue size 128, less than required. 00:13:15.484 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:15.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:15.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:15.484 Initialization complete. Launching workers. 00:13:15.484 ======================================================== 00:13:15.484 Latency(us) 00:13:15.484 Device Information : IOPS MiB/s Average min max 00:13:15.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1323.93 0.65 55005.92 2001.69 1011965.23 00:13:15.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10984.80 5.36 11652.20 3189.37 453080.61 00:13:15.484 ======================================================== 00:13:15.484 Total : 12308.73 6.01 16315.34 2001.69 1011965.23 00:13:15.484 00:13:15.484 18:43:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:15.484 18:43:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:15.741 true 00:13:15.741 18:43:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327561 00:13:15.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1327561) - No such process 00:13:15.741 18:43:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1327561 00:13:15.741 18:43:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.998 18:43:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:16.255 18:43:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:16.255 18:43:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:16.255 18:43:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:16.255 18:43:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:16.255 18:43:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:16.511 null0 00:13:16.511 18:43:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:16.511 18:43:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:16.512 18:43:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:16.768 null1 00:13:16.768 18:43:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:16.768 18:43:26 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:16.768 18:43:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:17.025 null2 00:13:17.025 18:43:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:17.025 18:43:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:17.025 18:43:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:17.283 null3 00:13:17.283 18:43:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:17.283 18:43:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:17.283 18:43:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:17.541 null4 00:13:17.541 18:43:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:17.541 18:43:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:17.541 18:43:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:17.799 null5 00:13:17.799 18:43:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:17.799 18:43:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:17.799 18:43:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:18.057 null6 00:13:18.057 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:18.057 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:18.057 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:18.315 null7 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
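Before the hot-plug workers get going, the @60 lines above provision one backing bdev per worker: eight null bdevs, null0 through null7, each created with the same size and block-size arguments (100 and 4096). Condensed into the loop the script is effectively running (nthreads=8 as in the trace):

# sketch: the per-worker null bdevs created by the @60 trace lines above
rpc_py=./scripts/rpc.py
nthreads=8
for ((i = 0; i < nthreads; i++)); do
    $rpc_py bdev_null_create "null$i" 100 4096   # same arguments as in the trace
done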
00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:18.315 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
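The @62-@64 lines here and just below spawn the eight add_remove workers (add_remove 1 null0 through add_remove 8 null7) and collect their PIDs for the wait that follows. Each worker attaches its bdev as a fixed namespace ID and detaches it again, ten times, so eight namespaces churn concurrently on nqn.2016-06.io.spdk:cnode1; the interleaved @17/@18 lines after this point are those workers racing each other. A reconstruction from the traced script lines 14-18 and 59-66 (the backgrounding syntax is an assumption; the RPC calls, counts and namespace-to-bdev mapping are visible in the trace):

# sketch: the add_remove worker and the parallel spawn/wait traced above and below
rpc_py=./scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
nthreads=8

add_remove() {                                   # @14-@18
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
        $rpc_py nvmf_subsystem_remove_ns "$nqn" "$nsid"
    done
}

pids=()
for ((i = 0; i < nthreads; i++)); do             # @59-@64
    add_remove $((i + 1)) "null$i" &             # namespace IDs 1-8 map onto null0-null7
    pids+=($!)
done
wait "${pids[@]}"                                # @66: the 'wait 1331729 1331730 ...' line below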
00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1331729 1331730 1331732 1331734 1331736 1331738 1331740 1331742 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.316 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:18.574 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:18.574 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:18.574 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:18.574 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.574 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:18.574 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:18.574 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:18.574 18:43:28 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.833 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:13:19.091 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:19.091 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:19.091 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.091 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:19.091 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:19.091 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:19.091 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:19.091 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:19.349 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.349 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.349 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:19.349 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.349 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.349 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:19.349 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.349 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.349 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:19.349 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.349 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.349 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:19.349 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.349 
18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.349 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:19.349 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.349 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.349 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.349 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:19.349 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.349 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:19.349 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.349 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.349 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:19.605 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:19.605 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:19.605 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.605 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:19.605 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:19.605 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:19.605 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:19.605 18:43:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.863 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:20.121 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:20.121 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:13:20.121 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:20.121 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:20.121 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:20.121 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:20.121 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:20.121 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.379 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:20.637 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:20.637 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:20.637 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:20.637 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.637 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:20.637 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:20.637 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:20.637 18:43:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:20.895 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.895 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.895 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:20.895 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.895 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.895 18:43:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:20.895 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.895 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.895 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:20.895 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.895 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.895 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:20.895 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.895 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.895 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:20.895 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.895 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.895 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.895 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.895 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:20.895 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:21.158 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.158 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.158 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:21.158 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:21.158 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:21.158 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:21.416 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:21.416 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:21.416 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:21.416 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.416 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:21.673 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.673 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.673 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:21.673 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.673 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.673 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:21.673 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.673 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.673 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:21.673 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.673 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.674 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:21.674 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.674 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.674 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.674 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:21.674 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.674 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:21.674 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:21.674 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.674 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:21.674 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.674 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.674 18:43:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:21.931 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:21.931 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:21.931 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:21.931 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:21.931 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.931 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:21.931 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:21.931 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:22.188 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.188 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.188 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:22.188 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.188 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.188 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:22.188 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.188 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.188 18:43:32 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:22.188 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.188 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.188 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:22.188 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.188 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.188 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:22.188 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.188 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.188 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.188 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.188 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:22.188 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:22.188 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.188 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.188 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:22.445 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:22.445 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:22.445 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:22.445 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:22.445 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:22.445 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:22.445 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.445 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.703 18:43:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:22.960 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:22.960 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:22.960 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:22.960 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:22.960 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:22.960 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:22.960 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.960 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:23.217 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.217 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.217 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:23.217 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.217 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.217 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:23.217 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.217 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.217 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:23.217 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.217 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.217 18:43:33 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:23.217 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.217 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.217 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.217 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.217 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:23.217 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:23.217 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.217 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.217 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:23.217 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.217 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.217 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:23.474 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:23.474 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:23.474 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:23.474 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:23.474 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:23.474 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.474 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:23.474 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:23.731 rmmod nvme_tcp 00:13:23.731 rmmod nvme_fabrics 00:13:23.731 rmmod nvme_keyring 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1327260 ']' 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1327260 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 1327260 ']' 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 1327260 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:13:23.731 18:43:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:23.731 18:43:33 
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1327260 00:13:23.731 18:43:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:23.731 18:43:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:23.731 18:43:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1327260' 00:13:23.731 killing process with pid 1327260 00:13:23.731 18:43:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 1327260 00:13:23.731 18:43:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 1327260 00:13:23.988 18:43:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:23.988 18:43:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:23.988 18:43:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:23.988 18:43:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:23.988 18:43:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:23.988 18:43:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.988 18:43:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.988 18:43:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.513 18:43:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:26.513 00:13:26.513 real 0m46.919s 00:13:26.513 user 3m19.382s 00:13:26.513 sys 0m19.828s 00:13:26.513 18:43:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:26.513 18:43:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.513 ************************************ 00:13:26.513 END TEST nvmf_ns_hotplug_stress 00:13:26.513 ************************************ 00:13:26.513 18:43:36 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:26.513 18:43:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:26.513 18:43:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:26.513 18:43:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:26.513 ************************************ 00:13:26.513 START TEST nvmf_connect_stress 00:13:26.513 ************************************ 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:26.513 * Looking for test storage... 
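For readers following the trace: the add/remove storm that just finished above (END TEST nvmf_ns_hotplug_stress) is eight copies of one small worker function. The sketch below is reconstructed from the ns_hotplug_stress.sh line tags in the xtrace (@14 and @16-@18 for the worker, @62-@66 for the driver); the helper name, loop syntax and the nsid/bdev mapping are inferred, and only the rpc.py invocations, the 10-iteration bound and the eight waited-on PIDs appear verbatim in the log. It also assumes the subsystem nqn.2016-06.io.spdk:cnode1 and the null0-null7 bdevs were created earlier in the test.

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  add_remove() {
      local nsid=$1 bdev=$2                             # @14
      for (( i = 0; i < 10; i++ )); do                  # @16
          # @17: attach bdev $bdev to cnode1 as namespace $nsid ...
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          # @18: ... and immediately detach it again
          "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }

  # @62-@64: one background worker per namespace (the trace shows
  # "add_remove 7 null6", "add_remove 8 null7", ...), then wait for all of them.
  pids=()
  for (( i = 0; i < nthreads; i++ )); do                # nthreads is 8 in this run
      add_remove $(( i + 1 )) "null$i" &
      pids+=($!)
  done
  wait "${pids[@]}"                                     # @66

Because the eight workers issue their RPCs concurrently against the same subsystem, the add_ns/remove_ns entries for different namespace IDs interleave arbitrarily in the log above; that interleaving is the hot-plug stress being exercised before nvmftestfini unloads nvme-tcp/nvme-fabrics and kills the target process (pid 1327260).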
00:13:26.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:26.513 18:43:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:28.413 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:28.413 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:28.413 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:28.414 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:28.414 18:43:38 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:28.414 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:28.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:28.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:13:28.414 00:13:28.414 --- 10.0.0.2 ping statistics --- 00:13:28.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.414 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:28.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:28.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:13:28.414 00:13:28.414 --- 10.0.0.1 ping statistics --- 00:13:28.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.414 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1334489 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1334489 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 1334489 ']' 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:28.414 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.414 [2024-07-20 18:43:38.619746] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
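The namespace bring-up traced just above is what makes 10.0.0.1/10.0.0.2 reachable before the target starts: one E810 port (cvl_0_0) is moved into a private network namespace for the target side, the peer port (cvl_0_1) stays in the default namespace for the initiator, TCP port 4420 is opened, and a ping in each direction confirms the link. A minimal stand-alone sketch of that sequence, using the interface names and addresses from this log (the real logic lives in the test harness's nvmf/common.sh and handles more configurations):

  ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in on the initiator port
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
  # The target application is then started inside the namespace, as traced above:
  # ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE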
00:13:28.414 [2024-07-20 18:43:38.619863] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.414 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.414 [2024-07-20 18:43:38.689614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:28.672 [2024-07-20 18:43:38.779828] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.672 [2024-07-20 18:43:38.779893] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.672 [2024-07-20 18:43:38.779919] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.672 [2024-07-20 18:43:38.779933] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.672 [2024-07-20 18:43:38.779945] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.672 [2024-07-20 18:43:38.780031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.672 [2024-07-20 18:43:38.780148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.672 [2024-07-20 18:43:38.780150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.672 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:28.672 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:13:28.672 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:28.672 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:28.672 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.672 18:43:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.672 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:28.672 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.672 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.672 [2024-07-20 18:43:38.928672] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.672 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.672 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:28.672 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.672 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.672 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.672 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.673 [2024-07-20 18:43:38.960969] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.673 NULL1 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1334515 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:28.673 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:28.930 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:28.930 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:28.930 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:28.930 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:28.930 18:43:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:28.930 18:43:38 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:28.930 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:28.930 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:28.930 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.930 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:28.930 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:28.930 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:28.930 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:28.930 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:28.930 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:28.930 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:28.930 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:28.930 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:28.930 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:28.930 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:28.930 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:28.930 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:28.930 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:28.930 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:28.930 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:28.930 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:28.930 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.930 18:43:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.930 18:43:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.188 18:43:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.188 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:29.188 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.188 18:43:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.188 18:43:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.445 18:43:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.445 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:29.445 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.445 18:43:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.445 18:43:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.703 18:43:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.703 18:43:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:29.703 18:43:39 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.703 18:43:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.703 18:43:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.267 18:43:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.267 18:43:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:30.267 18:43:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.267 18:43:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.267 18:43:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.525 18:43:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.525 18:43:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:30.525 18:43:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.525 18:43:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.525 18:43:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.783 18:43:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.783 18:43:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:30.783 18:43:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.783 18:43:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.783 18:43:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.040 18:43:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.040 18:43:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:31.040 18:43:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.040 18:43:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.040 18:43:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.298 18:43:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.298 18:43:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:31.298 18:43:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.298 18:43:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.298 18:43:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.864 18:43:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.864 18:43:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:31.864 18:43:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.864 18:43:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.864 18:43:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.122 18:43:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.122 18:43:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:32.122 18:43:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:13:32.122 18:43:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.122 18:43:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.380 18:43:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.380 18:43:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:32.380 18:43:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.380 18:43:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.380 18:43:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.638 18:43:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.638 18:43:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:32.638 18:43:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.638 18:43:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.638 18:43:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.894 18:43:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.894 18:43:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:32.894 18:43:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.894 18:43:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.894 18:43:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.458 18:43:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.458 18:43:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:33.458 18:43:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.458 18:43:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.458 18:43:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.715 18:43:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.715 18:43:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:33.715 18:43:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.715 18:43:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.715 18:43:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.972 18:43:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.972 18:43:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:33.972 18:43:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.972 18:43:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.972 18:43:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.257 18:43:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.257 18:43:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:34.257 18:43:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.257 18:43:44 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.257 18:43:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.514 18:43:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.514 18:43:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:34.514 18:43:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.514 18:43:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.514 18:43:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.097 18:43:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.097 18:43:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:35.097 18:43:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.097 18:43:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.097 18:43:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.354 18:43:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.354 18:43:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:35.354 18:43:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.354 18:43:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.354 18:43:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.611 18:43:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.611 18:43:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:35.611 18:43:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.611 18:43:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.611 18:43:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.868 18:43:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.868 18:43:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:35.868 18:43:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.868 18:43:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.868 18:43:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.125 18:43:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.125 18:43:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:36.125 18:43:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.125 18:43:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.125 18:43:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.689 18:43:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.690 18:43:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:36.690 18:43:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.690 18:43:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 
-- # xtrace_disable 00:13:36.690 18:43:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.948 18:43:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.948 18:43:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:36.948 18:43:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.948 18:43:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.948 18:43:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.205 18:43:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.205 18:43:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:37.205 18:43:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.205 18:43:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.205 18:43:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.463 18:43:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.463 18:43:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:37.463 18:43:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.463 18:43:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.463 18:43:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.720 18:43:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.720 18:43:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:37.720 18:43:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.721 18:43:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.721 18:43:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.283 18:43:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.283 18:43:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:38.283 18:43:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.283 18:43:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.283 18:43:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.539 18:43:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.539 18:43:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:38.539 18:43:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.539 18:43:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.539 18:43:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.797 18:43:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.797 18:43:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:38.797 18:43:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.797 18:43:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.797 18:43:48 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.053 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:39.053 18:43:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.053 18:43:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1334515 00:13:39.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1334515) - No such process 00:13:39.053 18:43:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1334515 00:13:39.053 18:43:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:39.053 18:43:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:39.053 18:43:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:39.053 18:43:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:39.053 18:43:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:39.053 18:43:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:39.053 18:43:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:39.053 18:43:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:39.053 18:43:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:39.053 rmmod nvme_tcp 00:13:39.053 rmmod nvme_fabrics 00:13:39.053 rmmod nvme_keyring 00:13:39.053 18:43:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:39.053 18:43:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:39.053 18:43:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:39.053 18:43:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1334489 ']' 00:13:39.053 18:43:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1334489 00:13:39.053 18:43:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 1334489 ']' 00:13:39.053 18:43:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 1334489 00:13:39.053 18:43:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:13:39.053 18:43:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:39.053 18:43:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1334489 00:13:39.311 18:43:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:39.311 18:43:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:39.311 18:43:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1334489' 00:13:39.311 killing process with pid 1334489 00:13:39.311 18:43:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 1334489 00:13:39.311 18:43:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 1334489 00:13:39.311 18:43:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:39.311 18:43:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:39.311 18:43:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
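The repeated "kill -0 1334515" / "rpc_cmd" pairs above are the stress loop of connect_stress.sh: as long as the connect_stress initiator (PERF_PID 1334515) is alive, the script keeps replaying the batch of RPC commands it collected into rpc.txt, and it leaves the loop once kill -0 reports "No such process". A condensed sketch of that polling pattern, assuming rpc_cmd is the autotest_common.sh wrapper around scripts/rpc.py; this is an approximation of the loop shape, not a verbatim copy of connect_stress.sh:

  # $PERF_PID is the connect_stress process started earlier; $rpcs points at the rpc.txt built above.
  while kill -0 "$PERF_PID" 2>/dev/null; do
      rpc_cmd < "$rpcs"                 # replay the queued subsystem RPCs against the running target
  done
  wait "$PERF_PID"                      # reap the initiator and propagate its exit status
  rm -f "$rpcs"

The nvmftestfini teardown that follows then unloads nvme-tcp, nvme-fabrics and nvme-keyring with modprobe -r and kills the nvmf_tgt process (pid 1334489) before the namespace is torn down.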
00:13:39.311 18:43:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:39.311 18:43:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:39.311 18:43:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.311 18:43:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.311 18:43:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.836 18:43:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:41.836 00:13:41.836 real 0m15.312s 00:13:41.836 user 0m37.926s 00:13:41.836 sys 0m6.204s 00:13:41.836 18:43:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:41.836 18:43:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.836 ************************************ 00:13:41.836 END TEST nvmf_connect_stress 00:13:41.836 ************************************ 00:13:41.836 18:43:51 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:41.836 18:43:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:41.836 18:43:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:41.836 18:43:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:41.836 ************************************ 00:13:41.836 START TEST nvmf_fused_ordering 00:13:41.836 ************************************ 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:41.836 * Looking for test storage... 
00:13:41.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:41.836 18:43:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:43.733 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:43.733 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:43.733 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.733 18:43:53 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:43.733 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.733 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:43.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:43.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:13:43.733 00:13:43.733 --- 10.0.0.2 ping statistics --- 00:13:43.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.733 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:13:43.734 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:43.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:13:43.734 00:13:43.734 --- 10.0.0.1 ping statistics --- 00:13:43.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.734 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:13:43.734 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.734 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:13:43.734 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:43.734 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.734 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:43.734 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:43.734 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.734 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:43.734 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:43.734 18:43:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:43.734 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:43.734 18:43:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:43.734 18:43:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.734 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1337657 00:13:43.734 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:43.734 18:43:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1337657 00:13:43.734 18:43:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 1337657 ']' 00:13:43.734 18:43:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.734 18:43:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:43.734 18:43:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.734 18:43:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:43.734 18:43:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.734 [2024-07-20 18:43:53.928216] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
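The nvmf_tcp_init steps traced above set up the two-interface loopback topology these phy jobs rely on: the target-side port (cvl_0_0) is moved into a private network namespace while the initiator-side port (cvl_0_1) stays in the root namespace, and reachability is verified in both directions before the target application is launched inside that namespace. A minimal sketch of the same sequence, using the interface names, namespace name and 10.0.0.0/24 addresses that appear in this run:

    # Sketch only; names and addresses are the ones shown in the trace above.
    ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on 4420
    ping -c 1 10.0.0.2                                             # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator check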
00:13:43.734 [2024-07-20 18:43:53.928309] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.734 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.734 [2024-07-20 18:43:53.993925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.992 [2024-07-20 18:43:54.083892] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.992 [2024-07-20 18:43:54.083946] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.992 [2024-07-20 18:43:54.083974] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.992 [2024-07-20 18:43:54.083986] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.992 [2024-07-20 18:43:54.083996] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.992 [2024-07-20 18:43:54.084023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.992 [2024-07-20 18:43:54.228778] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.992 [2024-07-20 18:43:54.244986] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- 
target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.992 NULL1 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.992 18:43:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:43.992 [2024-07-20 18:43:54.289022] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:43.992 [2024-07-20 18:43:54.289060] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337699 ] 00:13:44.249 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.179 Attached to nqn.2016-06.io.spdk:cnode1 00:13:45.179 Namespace ID: 1 size: 1GB 00:13:45.179 fused_ordering(0) 00:13:45.179 fused_ordering(1) 00:13:45.179 fused_ordering(2) 00:13:45.179 fused_ordering(3) 00:13:45.179 fused_ordering(4) 00:13:45.179 fused_ordering(5) 00:13:45.179 fused_ordering(6) 00:13:45.179 fused_ordering(7) 00:13:45.179 fused_ordering(8) 00:13:45.179 fused_ordering(9) 00:13:45.179 fused_ordering(10) 00:13:45.179 fused_ordering(11) 00:13:45.179 fused_ordering(12) 00:13:45.179 fused_ordering(13) 00:13:45.179 fused_ordering(14) 00:13:45.179 fused_ordering(15) 00:13:45.179 fused_ordering(16) 00:13:45.179 fused_ordering(17) 00:13:45.179 fused_ordering(18) 00:13:45.179 fused_ordering(19) 00:13:45.179 fused_ordering(20) 00:13:45.179 fused_ordering(21) 00:13:45.179 fused_ordering(22) 00:13:45.179 fused_ordering(23) 00:13:45.179 fused_ordering(24) 00:13:45.179 fused_ordering(25) 00:13:45.179 fused_ordering(26) 00:13:45.179 fused_ordering(27) 00:13:45.179 fused_ordering(28) 00:13:45.179 fused_ordering(29) 00:13:45.179 fused_ordering(30) 00:13:45.179 fused_ordering(31) 00:13:45.179 fused_ordering(32) 00:13:45.179 fused_ordering(33) 00:13:45.179 fused_ordering(34) 00:13:45.179 fused_ordering(35) 00:13:45.179 fused_ordering(36) 00:13:45.179 fused_ordering(37) 00:13:45.179 fused_ordering(38) 00:13:45.179 fused_ordering(39) 00:13:45.179 fused_ordering(40) 00:13:45.179 fused_ordering(41) 00:13:45.179 fused_ordering(42) 00:13:45.179 fused_ordering(43) 00:13:45.179 fused_ordering(44) 00:13:45.179 fused_ordering(45) 
00:13:45.179 fused_ordering(46) 00:13:45.179 fused_ordering(47) 00:13:45.179 fused_ordering(48) 00:13:45.179 fused_ordering(49) 00:13:45.179 fused_ordering(50) 00:13:45.179 fused_ordering(51) 00:13:45.179 fused_ordering(52) 00:13:45.179 fused_ordering(53) 00:13:45.179 fused_ordering(54) 00:13:45.179 fused_ordering(55) 00:13:45.179 fused_ordering(56) 00:13:45.179 fused_ordering(57) 00:13:45.179 fused_ordering(58) 00:13:45.179 fused_ordering(59) 00:13:45.179 fused_ordering(60) 00:13:45.179 fused_ordering(61) 00:13:45.179 fused_ordering(62) 00:13:45.179 fused_ordering(63) 00:13:45.179 fused_ordering(64) 00:13:45.179 fused_ordering(65) 00:13:45.179 fused_ordering(66) 00:13:45.180 fused_ordering(67) 00:13:45.180 fused_ordering(68) 00:13:45.180 fused_ordering(69) 00:13:45.180 fused_ordering(70) 00:13:45.180 fused_ordering(71) 00:13:45.180 fused_ordering(72) 00:13:45.180 fused_ordering(73) 00:13:45.180 fused_ordering(74) 00:13:45.180 fused_ordering(75) 00:13:45.180 fused_ordering(76) 00:13:45.180 fused_ordering(77) 00:13:45.180 fused_ordering(78) 00:13:45.180 fused_ordering(79) 00:13:45.180 fused_ordering(80) 00:13:45.180 fused_ordering(81) 00:13:45.180 fused_ordering(82) 00:13:45.180 fused_ordering(83) 00:13:45.180 fused_ordering(84) 00:13:45.180 fused_ordering(85) 00:13:45.180 fused_ordering(86) 00:13:45.180 fused_ordering(87) 00:13:45.180 fused_ordering(88) 00:13:45.180 fused_ordering(89) 00:13:45.180 fused_ordering(90) 00:13:45.180 fused_ordering(91) 00:13:45.180 fused_ordering(92) 00:13:45.180 fused_ordering(93) 00:13:45.180 fused_ordering(94) 00:13:45.180 fused_ordering(95) 00:13:45.180 fused_ordering(96) 00:13:45.180 fused_ordering(97) 00:13:45.180 fused_ordering(98) 00:13:45.180 fused_ordering(99) 00:13:45.180 fused_ordering(100) 00:13:45.180 fused_ordering(101) 00:13:45.180 fused_ordering(102) 00:13:45.180 fused_ordering(103) 00:13:45.180 fused_ordering(104) 00:13:45.180 fused_ordering(105) 00:13:45.180 fused_ordering(106) 00:13:45.180 fused_ordering(107) 00:13:45.180 fused_ordering(108) 00:13:45.180 fused_ordering(109) 00:13:45.180 fused_ordering(110) 00:13:45.180 fused_ordering(111) 00:13:45.180 fused_ordering(112) 00:13:45.180 fused_ordering(113) 00:13:45.180 fused_ordering(114) 00:13:45.180 fused_ordering(115) 00:13:45.180 fused_ordering(116) 00:13:45.180 fused_ordering(117) 00:13:45.180 fused_ordering(118) 00:13:45.180 fused_ordering(119) 00:13:45.180 fused_ordering(120) 00:13:45.180 fused_ordering(121) 00:13:45.180 fused_ordering(122) 00:13:45.180 fused_ordering(123) 00:13:45.180 fused_ordering(124) 00:13:45.180 fused_ordering(125) 00:13:45.180 fused_ordering(126) 00:13:45.180 fused_ordering(127) 00:13:45.180 fused_ordering(128) 00:13:45.180 fused_ordering(129) 00:13:45.180 fused_ordering(130) 00:13:45.180 fused_ordering(131) 00:13:45.180 fused_ordering(132) 00:13:45.180 fused_ordering(133) 00:13:45.180 fused_ordering(134) 00:13:45.180 fused_ordering(135) 00:13:45.180 fused_ordering(136) 00:13:45.180 fused_ordering(137) 00:13:45.180 fused_ordering(138) 00:13:45.180 fused_ordering(139) 00:13:45.180 fused_ordering(140) 00:13:45.180 fused_ordering(141) 00:13:45.180 fused_ordering(142) 00:13:45.180 fused_ordering(143) 00:13:45.180 fused_ordering(144) 00:13:45.180 fused_ordering(145) 00:13:45.180 fused_ordering(146) 00:13:45.180 fused_ordering(147) 00:13:45.180 fused_ordering(148) 00:13:45.180 fused_ordering(149) 00:13:45.180 fused_ordering(150) 00:13:45.180 fused_ordering(151) 00:13:45.180 fused_ordering(152) 00:13:45.180 fused_ordering(153) 00:13:45.180 fused_ordering(154) 
00:13:45.180 fused_ordering(155) 00:13:45.180 fused_ordering(156) 00:13:45.180 fused_ordering(157) 00:13:45.180 fused_ordering(158) 00:13:45.180 fused_ordering(159) 00:13:45.180 fused_ordering(160) 00:13:45.180 fused_ordering(161) 00:13:45.180 fused_ordering(162) 00:13:45.180 fused_ordering(163) 00:13:45.180 fused_ordering(164) 00:13:45.180 fused_ordering(165) 00:13:45.180 fused_ordering(166) 00:13:45.180 fused_ordering(167) 00:13:45.180 fused_ordering(168) 00:13:45.180 fused_ordering(169) 00:13:45.180 fused_ordering(170) 00:13:45.180 fused_ordering(171) 00:13:45.180 fused_ordering(172) 00:13:45.180 fused_ordering(173) 00:13:45.180 fused_ordering(174) 00:13:45.180 fused_ordering(175) 00:13:45.180 fused_ordering(176) 00:13:45.180 fused_ordering(177) 00:13:45.180 fused_ordering(178) 00:13:45.180 fused_ordering(179) 00:13:45.180 fused_ordering(180) 00:13:45.180 fused_ordering(181) 00:13:45.180 fused_ordering(182) 00:13:45.180 fused_ordering(183) 00:13:45.180 fused_ordering(184) 00:13:45.180 fused_ordering(185) 00:13:45.180 fused_ordering(186) 00:13:45.180 fused_ordering(187) 00:13:45.180 fused_ordering(188) 00:13:45.180 fused_ordering(189) 00:13:45.180 fused_ordering(190) 00:13:45.180 fused_ordering(191) 00:13:45.180 fused_ordering(192) 00:13:45.180 fused_ordering(193) 00:13:45.180 fused_ordering(194) 00:13:45.180 fused_ordering(195) 00:13:45.180 fused_ordering(196) 00:13:45.180 fused_ordering(197) 00:13:45.180 fused_ordering(198) 00:13:45.180 fused_ordering(199) 00:13:45.180 fused_ordering(200) 00:13:45.180 fused_ordering(201) 00:13:45.180 fused_ordering(202) 00:13:45.180 fused_ordering(203) 00:13:45.180 fused_ordering(204) 00:13:45.180 fused_ordering(205) 00:13:46.110 fused_ordering(206) 00:13:46.110 fused_ordering(207) 00:13:46.110 fused_ordering(208) 00:13:46.110 fused_ordering(209) 00:13:46.110 fused_ordering(210) 00:13:46.110 fused_ordering(211) 00:13:46.110 fused_ordering(212) 00:13:46.110 fused_ordering(213) 00:13:46.110 fused_ordering(214) 00:13:46.110 fused_ordering(215) 00:13:46.110 fused_ordering(216) 00:13:46.110 fused_ordering(217) 00:13:46.110 fused_ordering(218) 00:13:46.110 fused_ordering(219) 00:13:46.110 fused_ordering(220) 00:13:46.110 fused_ordering(221) 00:13:46.110 fused_ordering(222) 00:13:46.110 fused_ordering(223) 00:13:46.110 fused_ordering(224) 00:13:46.110 fused_ordering(225) 00:13:46.110 fused_ordering(226) 00:13:46.110 fused_ordering(227) 00:13:46.110 fused_ordering(228) 00:13:46.110 fused_ordering(229) 00:13:46.110 fused_ordering(230) 00:13:46.110 fused_ordering(231) 00:13:46.110 fused_ordering(232) 00:13:46.110 fused_ordering(233) 00:13:46.110 fused_ordering(234) 00:13:46.110 fused_ordering(235) 00:13:46.110 fused_ordering(236) 00:13:46.110 fused_ordering(237) 00:13:46.110 fused_ordering(238) 00:13:46.110 fused_ordering(239) 00:13:46.110 fused_ordering(240) 00:13:46.110 fused_ordering(241) 00:13:46.110 fused_ordering(242) 00:13:46.110 fused_ordering(243) 00:13:46.110 fused_ordering(244) 00:13:46.110 fused_ordering(245) 00:13:46.110 fused_ordering(246) 00:13:46.110 fused_ordering(247) 00:13:46.110 fused_ordering(248) 00:13:46.110 fused_ordering(249) 00:13:46.110 fused_ordering(250) 00:13:46.110 fused_ordering(251) 00:13:46.110 fused_ordering(252) 00:13:46.110 fused_ordering(253) 00:13:46.110 fused_ordering(254) 00:13:46.110 fused_ordering(255) 00:13:46.110 fused_ordering(256) 00:13:46.110 fused_ordering(257) 00:13:46.110 fused_ordering(258) 00:13:46.110 fused_ordering(259) 00:13:46.110 fused_ordering(260) 00:13:46.111 fused_ordering(261) 00:13:46.111 
fused_ordering(262) 00:13:46.111 fused_ordering(263) 00:13:46.111 fused_ordering(264) 00:13:46.111 fused_ordering(265) 00:13:46.111 fused_ordering(266) 00:13:46.111 fused_ordering(267) 00:13:46.111 fused_ordering(268) 00:13:46.111 fused_ordering(269) 00:13:46.111 fused_ordering(270) 00:13:46.111 fused_ordering(271) 00:13:46.111 fused_ordering(272) 00:13:46.111 fused_ordering(273) 00:13:46.111 fused_ordering(274) 00:13:46.111 fused_ordering(275) 00:13:46.111 fused_ordering(276) 00:13:46.111 fused_ordering(277) 00:13:46.111 fused_ordering(278) 00:13:46.111 fused_ordering(279) 00:13:46.111 fused_ordering(280) 00:13:46.111 fused_ordering(281) 00:13:46.111 fused_ordering(282) 00:13:46.111 fused_ordering(283) 00:13:46.111 fused_ordering(284) 00:13:46.111 fused_ordering(285) 00:13:46.111 fused_ordering(286) 00:13:46.111 fused_ordering(287) 00:13:46.111 fused_ordering(288) 00:13:46.111 fused_ordering(289) 00:13:46.111 fused_ordering(290) 00:13:46.111 fused_ordering(291) 00:13:46.111 fused_ordering(292) 00:13:46.111 fused_ordering(293) 00:13:46.111 fused_ordering(294) 00:13:46.111 fused_ordering(295) 00:13:46.111 fused_ordering(296) 00:13:46.111 fused_ordering(297) 00:13:46.111 fused_ordering(298) 00:13:46.111 fused_ordering(299) 00:13:46.111 fused_ordering(300) 00:13:46.111 fused_ordering(301) 00:13:46.111 fused_ordering(302) 00:13:46.111 fused_ordering(303) 00:13:46.111 fused_ordering(304) 00:13:46.111 fused_ordering(305) 00:13:46.111 fused_ordering(306) 00:13:46.111 fused_ordering(307) 00:13:46.111 fused_ordering(308) 00:13:46.111 fused_ordering(309) 00:13:46.111 fused_ordering(310) 00:13:46.111 fused_ordering(311) 00:13:46.111 fused_ordering(312) 00:13:46.111 fused_ordering(313) 00:13:46.111 fused_ordering(314) 00:13:46.111 fused_ordering(315) 00:13:46.111 fused_ordering(316) 00:13:46.111 fused_ordering(317) 00:13:46.111 fused_ordering(318) 00:13:46.111 fused_ordering(319) 00:13:46.111 fused_ordering(320) 00:13:46.111 fused_ordering(321) 00:13:46.111 fused_ordering(322) 00:13:46.111 fused_ordering(323) 00:13:46.111 fused_ordering(324) 00:13:46.111 fused_ordering(325) 00:13:46.111 fused_ordering(326) 00:13:46.111 fused_ordering(327) 00:13:46.111 fused_ordering(328) 00:13:46.111 fused_ordering(329) 00:13:46.111 fused_ordering(330) 00:13:46.111 fused_ordering(331) 00:13:46.111 fused_ordering(332) 00:13:46.111 fused_ordering(333) 00:13:46.111 fused_ordering(334) 00:13:46.111 fused_ordering(335) 00:13:46.111 fused_ordering(336) 00:13:46.111 fused_ordering(337) 00:13:46.111 fused_ordering(338) 00:13:46.111 fused_ordering(339) 00:13:46.111 fused_ordering(340) 00:13:46.111 fused_ordering(341) 00:13:46.111 fused_ordering(342) 00:13:46.111 fused_ordering(343) 00:13:46.111 fused_ordering(344) 00:13:46.111 fused_ordering(345) 00:13:46.111 fused_ordering(346) 00:13:46.111 fused_ordering(347) 00:13:46.111 fused_ordering(348) 00:13:46.111 fused_ordering(349) 00:13:46.111 fused_ordering(350) 00:13:46.111 fused_ordering(351) 00:13:46.111 fused_ordering(352) 00:13:46.111 fused_ordering(353) 00:13:46.111 fused_ordering(354) 00:13:46.111 fused_ordering(355) 00:13:46.111 fused_ordering(356) 00:13:46.111 fused_ordering(357) 00:13:46.111 fused_ordering(358) 00:13:46.111 fused_ordering(359) 00:13:46.111 fused_ordering(360) 00:13:46.111 fused_ordering(361) 00:13:46.111 fused_ordering(362) 00:13:46.111 fused_ordering(363) 00:13:46.111 fused_ordering(364) 00:13:46.111 fused_ordering(365) 00:13:46.111 fused_ordering(366) 00:13:46.111 fused_ordering(367) 00:13:46.111 fused_ordering(368) 00:13:46.111 fused_ordering(369) 
00:13:46.111 fused_ordering(370) 00:13:46.111 fused_ordering(371) 00:13:46.111 fused_ordering(372) 00:13:46.111 fused_ordering(373) 00:13:46.111 fused_ordering(374) 00:13:46.111 fused_ordering(375) 00:13:46.111 fused_ordering(376) 00:13:46.111 fused_ordering(377) 00:13:46.111 fused_ordering(378) 00:13:46.111 fused_ordering(379) 00:13:46.111 fused_ordering(380) 00:13:46.111 fused_ordering(381) 00:13:46.111 fused_ordering(382) 00:13:46.111 fused_ordering(383) 00:13:46.111 fused_ordering(384) 00:13:46.111 fused_ordering(385) 00:13:46.111 fused_ordering(386) 00:13:46.111 fused_ordering(387) 00:13:46.111 fused_ordering(388) 00:13:46.111 fused_ordering(389) 00:13:46.111 fused_ordering(390) 00:13:46.111 fused_ordering(391) 00:13:46.111 fused_ordering(392) 00:13:46.111 fused_ordering(393) 00:13:46.111 fused_ordering(394) 00:13:46.111 fused_ordering(395) 00:13:46.111 fused_ordering(396) 00:13:46.111 fused_ordering(397) 00:13:46.111 fused_ordering(398) 00:13:46.111 fused_ordering(399) 00:13:46.111 fused_ordering(400) 00:13:46.111 fused_ordering(401) 00:13:46.111 fused_ordering(402) 00:13:46.111 fused_ordering(403) 00:13:46.111 fused_ordering(404) 00:13:46.111 fused_ordering(405) 00:13:46.111 fused_ordering(406) 00:13:46.111 fused_ordering(407) 00:13:46.111 fused_ordering(408) 00:13:46.111 fused_ordering(409) 00:13:46.111 fused_ordering(410) 00:13:47.477 fused_ordering(411) 00:13:47.478 fused_ordering(412) 00:13:47.478 fused_ordering(413) 00:13:47.478 fused_ordering(414) 00:13:47.478 fused_ordering(415) 00:13:47.478 fused_ordering(416) 00:13:47.478 fused_ordering(417) 00:13:47.478 fused_ordering(418) 00:13:47.478 fused_ordering(419) 00:13:47.478 fused_ordering(420) 00:13:47.478 fused_ordering(421) 00:13:47.478 fused_ordering(422) 00:13:47.478 fused_ordering(423) 00:13:47.478 fused_ordering(424) 00:13:47.478 fused_ordering(425) 00:13:47.478 fused_ordering(426) 00:13:47.478 fused_ordering(427) 00:13:47.478 fused_ordering(428) 00:13:47.478 fused_ordering(429) 00:13:47.478 fused_ordering(430) 00:13:47.478 fused_ordering(431) 00:13:47.478 fused_ordering(432) 00:13:47.478 fused_ordering(433) 00:13:47.478 fused_ordering(434) 00:13:47.478 fused_ordering(435) 00:13:47.478 fused_ordering(436) 00:13:47.478 fused_ordering(437) 00:13:47.478 fused_ordering(438) 00:13:47.478 fused_ordering(439) 00:13:47.478 fused_ordering(440) 00:13:47.478 fused_ordering(441) 00:13:47.478 fused_ordering(442) 00:13:47.478 fused_ordering(443) 00:13:47.478 fused_ordering(444) 00:13:47.478 fused_ordering(445) 00:13:47.478 fused_ordering(446) 00:13:47.478 fused_ordering(447) 00:13:47.478 fused_ordering(448) 00:13:47.478 fused_ordering(449) 00:13:47.478 fused_ordering(450) 00:13:47.478 fused_ordering(451) 00:13:47.478 fused_ordering(452) 00:13:47.478 fused_ordering(453) 00:13:47.478 fused_ordering(454) 00:13:47.478 fused_ordering(455) 00:13:47.478 fused_ordering(456) 00:13:47.478 fused_ordering(457) 00:13:47.478 fused_ordering(458) 00:13:47.478 fused_ordering(459) 00:13:47.478 fused_ordering(460) 00:13:47.478 fused_ordering(461) 00:13:47.478 fused_ordering(462) 00:13:47.478 fused_ordering(463) 00:13:47.478 fused_ordering(464) 00:13:47.478 fused_ordering(465) 00:13:47.478 fused_ordering(466) 00:13:47.478 fused_ordering(467) 00:13:47.478 fused_ordering(468) 00:13:47.478 fused_ordering(469) 00:13:47.478 fused_ordering(470) 00:13:47.478 fused_ordering(471) 00:13:47.478 fused_ordering(472) 00:13:47.478 fused_ordering(473) 00:13:47.478 fused_ordering(474) 00:13:47.478 fused_ordering(475) 00:13:47.478 fused_ordering(476) 00:13:47.478 
fused_ordering(477) 00:13:47.478 fused_ordering(478) 00:13:47.478 fused_ordering(479) 00:13:47.478 fused_ordering(480) 00:13:47.478 fused_ordering(481) 00:13:47.478 fused_ordering(482) 00:13:47.478 fused_ordering(483) 00:13:47.478 fused_ordering(484) 00:13:47.478 fused_ordering(485) 00:13:47.478 fused_ordering(486) 00:13:47.478 fused_ordering(487) 00:13:47.478 fused_ordering(488) 00:13:47.478 fused_ordering(489) 00:13:47.478 fused_ordering(490) 00:13:47.478 fused_ordering(491) 00:13:47.478 fused_ordering(492) 00:13:47.478 fused_ordering(493) 00:13:47.478 fused_ordering(494) 00:13:47.478 fused_ordering(495) 00:13:47.478 fused_ordering(496) 00:13:47.478 fused_ordering(497) 00:13:47.478 fused_ordering(498) 00:13:47.478 fused_ordering(499) 00:13:47.478 fused_ordering(500) 00:13:47.478 fused_ordering(501) 00:13:47.478 fused_ordering(502) 00:13:47.478 fused_ordering(503) 00:13:47.478 fused_ordering(504) 00:13:47.478 fused_ordering(505) 00:13:47.478 fused_ordering(506) 00:13:47.478 fused_ordering(507) 00:13:47.478 fused_ordering(508) 00:13:47.478 fused_ordering(509) 00:13:47.478 fused_ordering(510) 00:13:47.478 fused_ordering(511) 00:13:47.478 fused_ordering(512) 00:13:47.478 fused_ordering(513) 00:13:47.478 fused_ordering(514) 00:13:47.478 fused_ordering(515) 00:13:47.478 fused_ordering(516) 00:13:47.478 fused_ordering(517) 00:13:47.478 fused_ordering(518) 00:13:47.478 fused_ordering(519) 00:13:47.478 fused_ordering(520) 00:13:47.478 fused_ordering(521) 00:13:47.478 fused_ordering(522) 00:13:47.478 fused_ordering(523) 00:13:47.478 fused_ordering(524) 00:13:47.478 fused_ordering(525) 00:13:47.478 fused_ordering(526) 00:13:47.478 fused_ordering(527) 00:13:47.478 fused_ordering(528) 00:13:47.478 fused_ordering(529) 00:13:47.478 fused_ordering(530) 00:13:47.478 fused_ordering(531) 00:13:47.478 fused_ordering(532) 00:13:47.478 fused_ordering(533) 00:13:47.478 fused_ordering(534) 00:13:47.478 fused_ordering(535) 00:13:47.478 fused_ordering(536) 00:13:47.478 fused_ordering(537) 00:13:47.478 fused_ordering(538) 00:13:47.478 fused_ordering(539) 00:13:47.478 fused_ordering(540) 00:13:47.478 fused_ordering(541) 00:13:47.478 fused_ordering(542) 00:13:47.478 fused_ordering(543) 00:13:47.478 fused_ordering(544) 00:13:47.478 fused_ordering(545) 00:13:47.478 fused_ordering(546) 00:13:47.478 fused_ordering(547) 00:13:47.478 fused_ordering(548) 00:13:47.478 fused_ordering(549) 00:13:47.478 fused_ordering(550) 00:13:47.478 fused_ordering(551) 00:13:47.478 fused_ordering(552) 00:13:47.478 fused_ordering(553) 00:13:47.478 fused_ordering(554) 00:13:47.478 fused_ordering(555) 00:13:47.478 fused_ordering(556) 00:13:47.478 fused_ordering(557) 00:13:47.478 fused_ordering(558) 00:13:47.478 fused_ordering(559) 00:13:47.478 fused_ordering(560) 00:13:47.478 fused_ordering(561) 00:13:47.478 fused_ordering(562) 00:13:47.478 fused_ordering(563) 00:13:47.478 fused_ordering(564) 00:13:47.478 fused_ordering(565) 00:13:47.478 fused_ordering(566) 00:13:47.478 fused_ordering(567) 00:13:47.478 fused_ordering(568) 00:13:47.478 fused_ordering(569) 00:13:47.478 fused_ordering(570) 00:13:47.478 fused_ordering(571) 00:13:47.478 fused_ordering(572) 00:13:47.478 fused_ordering(573) 00:13:47.478 fused_ordering(574) 00:13:47.478 fused_ordering(575) 00:13:47.478 fused_ordering(576) 00:13:47.478 fused_ordering(577) 00:13:47.478 fused_ordering(578) 00:13:47.478 fused_ordering(579) 00:13:47.478 fused_ordering(580) 00:13:47.478 fused_ordering(581) 00:13:47.478 fused_ordering(582) 00:13:47.478 fused_ordering(583) 00:13:47.478 fused_ordering(584) 
00:13:47.478 fused_ordering(585) 00:13:47.478 fused_ordering(586) 00:13:47.478 fused_ordering(587) 00:13:47.478 fused_ordering(588) 00:13:47.478 fused_ordering(589) 00:13:47.478 fused_ordering(590) 00:13:47.478 fused_ordering(591) 00:13:47.478 fused_ordering(592) 00:13:47.478 fused_ordering(593) 00:13:47.478 fused_ordering(594) 00:13:47.478 fused_ordering(595) 00:13:47.478 fused_ordering(596) 00:13:47.478 fused_ordering(597) 00:13:47.478 fused_ordering(598) 00:13:47.478 fused_ordering(599) 00:13:47.478 fused_ordering(600) 00:13:47.478 fused_ordering(601) 00:13:47.478 fused_ordering(602) 00:13:47.478 fused_ordering(603) 00:13:47.478 fused_ordering(604) 00:13:47.478 fused_ordering(605) 00:13:47.478 fused_ordering(606) 00:13:47.478 fused_ordering(607) 00:13:47.478 fused_ordering(608) 00:13:47.478 fused_ordering(609) 00:13:47.478 fused_ordering(610) 00:13:47.478 fused_ordering(611) 00:13:47.478 fused_ordering(612) 00:13:47.478 fused_ordering(613) 00:13:47.478 fused_ordering(614) 00:13:47.478 fused_ordering(615) 00:13:48.448 fused_ordering(616) 00:13:48.448 fused_ordering(617) 00:13:48.448 fused_ordering(618) 00:13:48.448 fused_ordering(619) 00:13:48.448 fused_ordering(620) 00:13:48.448 fused_ordering(621) 00:13:48.448 fused_ordering(622) 00:13:48.448 fused_ordering(623) 00:13:48.448 fused_ordering(624) 00:13:48.448 fused_ordering(625) 00:13:48.448 fused_ordering(626) 00:13:48.448 fused_ordering(627) 00:13:48.448 fused_ordering(628) 00:13:48.448 fused_ordering(629) 00:13:48.448 fused_ordering(630) 00:13:48.448 fused_ordering(631) 00:13:48.448 fused_ordering(632) 00:13:48.448 fused_ordering(633) 00:13:48.448 fused_ordering(634) 00:13:48.448 fused_ordering(635) 00:13:48.448 fused_ordering(636) 00:13:48.448 fused_ordering(637) 00:13:48.448 fused_ordering(638) 00:13:48.448 fused_ordering(639) 00:13:48.448 fused_ordering(640) 00:13:48.448 fused_ordering(641) 00:13:48.448 fused_ordering(642) 00:13:48.448 fused_ordering(643) 00:13:48.448 fused_ordering(644) 00:13:48.448 fused_ordering(645) 00:13:48.448 fused_ordering(646) 00:13:48.448 fused_ordering(647) 00:13:48.448 fused_ordering(648) 00:13:48.448 fused_ordering(649) 00:13:48.448 fused_ordering(650) 00:13:48.448 fused_ordering(651) 00:13:48.448 fused_ordering(652) 00:13:48.448 fused_ordering(653) 00:13:48.448 fused_ordering(654) 00:13:48.448 fused_ordering(655) 00:13:48.448 fused_ordering(656) 00:13:48.448 fused_ordering(657) 00:13:48.448 fused_ordering(658) 00:13:48.448 fused_ordering(659) 00:13:48.448 fused_ordering(660) 00:13:48.448 fused_ordering(661) 00:13:48.448 fused_ordering(662) 00:13:48.448 fused_ordering(663) 00:13:48.448 fused_ordering(664) 00:13:48.448 fused_ordering(665) 00:13:48.448 fused_ordering(666) 00:13:48.448 fused_ordering(667) 00:13:48.448 fused_ordering(668) 00:13:48.448 fused_ordering(669) 00:13:48.448 fused_ordering(670) 00:13:48.448 fused_ordering(671) 00:13:48.448 fused_ordering(672) 00:13:48.448 fused_ordering(673) 00:13:48.448 fused_ordering(674) 00:13:48.448 fused_ordering(675) 00:13:48.448 fused_ordering(676) 00:13:48.448 fused_ordering(677) 00:13:48.448 fused_ordering(678) 00:13:48.448 fused_ordering(679) 00:13:48.448 fused_ordering(680) 00:13:48.448 fused_ordering(681) 00:13:48.448 fused_ordering(682) 00:13:48.448 fused_ordering(683) 00:13:48.448 fused_ordering(684) 00:13:48.448 fused_ordering(685) 00:13:48.448 fused_ordering(686) 00:13:48.448 fused_ordering(687) 00:13:48.448 fused_ordering(688) 00:13:48.448 fused_ordering(689) 00:13:48.448 fused_ordering(690) 00:13:48.448 fused_ordering(691) 00:13:48.448 
fused_ordering(692) 00:13:48.448 fused_ordering(693) 00:13:48.448 fused_ordering(694) 00:13:48.448 fused_ordering(695) 00:13:48.448 fused_ordering(696) 00:13:48.448 fused_ordering(697) 00:13:48.448 fused_ordering(698) 00:13:48.448 fused_ordering(699) 00:13:48.448 fused_ordering(700) 00:13:48.448 fused_ordering(701) 00:13:48.448 fused_ordering(702) 00:13:48.448 fused_ordering(703) 00:13:48.448 fused_ordering(704) 00:13:48.448 fused_ordering(705) 00:13:48.448 fused_ordering(706) 00:13:48.448 fused_ordering(707) 00:13:48.448 fused_ordering(708) 00:13:48.448 fused_ordering(709) 00:13:48.448 fused_ordering(710) 00:13:48.448 fused_ordering(711) 00:13:48.448 fused_ordering(712) 00:13:48.448 fused_ordering(713) 00:13:48.448 fused_ordering(714) 00:13:48.448 fused_ordering(715) 00:13:48.448 fused_ordering(716) 00:13:48.448 fused_ordering(717) 00:13:48.448 fused_ordering(718) 00:13:48.448 fused_ordering(719) 00:13:48.448 fused_ordering(720) 00:13:48.448 fused_ordering(721) 00:13:48.448 fused_ordering(722) 00:13:48.448 fused_ordering(723) 00:13:48.448 fused_ordering(724) 00:13:48.448 fused_ordering(725) 00:13:48.448 fused_ordering(726) 00:13:48.448 fused_ordering(727) 00:13:48.448 fused_ordering(728) 00:13:48.448 fused_ordering(729) 00:13:48.448 fused_ordering(730) 00:13:48.448 fused_ordering(731) 00:13:48.448 fused_ordering(732) 00:13:48.448 fused_ordering(733) 00:13:48.448 fused_ordering(734) 00:13:48.448 fused_ordering(735) 00:13:48.448 fused_ordering(736) 00:13:48.448 fused_ordering(737) 00:13:48.448 fused_ordering(738) 00:13:48.448 fused_ordering(739) 00:13:48.448 fused_ordering(740) 00:13:48.448 fused_ordering(741) 00:13:48.448 fused_ordering(742) 00:13:48.448 fused_ordering(743) 00:13:48.448 fused_ordering(744) 00:13:48.448 fused_ordering(745) 00:13:48.448 fused_ordering(746) 00:13:48.448 fused_ordering(747) 00:13:48.448 fused_ordering(748) 00:13:48.448 fused_ordering(749) 00:13:48.448 fused_ordering(750) 00:13:48.448 fused_ordering(751) 00:13:48.448 fused_ordering(752) 00:13:48.448 fused_ordering(753) 00:13:48.448 fused_ordering(754) 00:13:48.448 fused_ordering(755) 00:13:48.448 fused_ordering(756) 00:13:48.448 fused_ordering(757) 00:13:48.448 fused_ordering(758) 00:13:48.448 fused_ordering(759) 00:13:48.448 fused_ordering(760) 00:13:48.448 fused_ordering(761) 00:13:48.448 fused_ordering(762) 00:13:48.448 fused_ordering(763) 00:13:48.448 fused_ordering(764) 00:13:48.448 fused_ordering(765) 00:13:48.448 fused_ordering(766) 00:13:48.448 fused_ordering(767) 00:13:48.448 fused_ordering(768) 00:13:48.448 fused_ordering(769) 00:13:48.448 fused_ordering(770) 00:13:48.448 fused_ordering(771) 00:13:48.448 fused_ordering(772) 00:13:48.448 fused_ordering(773) 00:13:48.448 fused_ordering(774) 00:13:48.448 fused_ordering(775) 00:13:48.448 fused_ordering(776) 00:13:48.448 fused_ordering(777) 00:13:48.448 fused_ordering(778) 00:13:48.448 fused_ordering(779) 00:13:48.448 fused_ordering(780) 00:13:48.448 fused_ordering(781) 00:13:48.448 fused_ordering(782) 00:13:48.448 fused_ordering(783) 00:13:48.448 fused_ordering(784) 00:13:48.448 fused_ordering(785) 00:13:48.448 fused_ordering(786) 00:13:48.448 fused_ordering(787) 00:13:48.448 fused_ordering(788) 00:13:48.448 fused_ordering(789) 00:13:48.448 fused_ordering(790) 00:13:48.448 fused_ordering(791) 00:13:48.448 fused_ordering(792) 00:13:48.448 fused_ordering(793) 00:13:48.448 fused_ordering(794) 00:13:48.448 fused_ordering(795) 00:13:48.448 fused_ordering(796) 00:13:48.448 fused_ordering(797) 00:13:48.448 fused_ordering(798) 00:13:48.448 fused_ordering(799) 
00:13:48.448 fused_ordering(800) 00:13:48.448 fused_ordering(801) 00:13:48.448 fused_ordering(802) 00:13:48.448 fused_ordering(803) 00:13:48.448 fused_ordering(804) 00:13:48.448 fused_ordering(805) 00:13:48.448 fused_ordering(806) 00:13:48.448 fused_ordering(807) 00:13:48.448 fused_ordering(808) 00:13:48.448 fused_ordering(809) 00:13:48.448 fused_ordering(810) 00:13:48.448 fused_ordering(811) 00:13:48.448 fused_ordering(812) 00:13:48.448 fused_ordering(813) 00:13:48.448 fused_ordering(814) 00:13:48.448 fused_ordering(815) 00:13:48.448 fused_ordering(816) 00:13:48.448 fused_ordering(817) 00:13:48.448 fused_ordering(818) 00:13:48.448 fused_ordering(819) 00:13:48.448 fused_ordering(820) 00:13:49.382 fused_ordering(821) 00:13:49.382 fused_ordering(822) 00:13:49.382 fused_ordering(823) 00:13:49.382 fused_ordering(824) 00:13:49.382 fused_ordering(825) 00:13:49.382 fused_ordering(826) 00:13:49.382 fused_ordering(827) 00:13:49.382 fused_ordering(828) 00:13:49.382 fused_ordering(829) 00:13:49.382 fused_ordering(830) 00:13:49.382 fused_ordering(831) 00:13:49.382 fused_ordering(832) 00:13:49.382 fused_ordering(833) 00:13:49.382 fused_ordering(834) 00:13:49.382 fused_ordering(835) 00:13:49.382 fused_ordering(836) 00:13:49.382 fused_ordering(837) 00:13:49.382 fused_ordering(838) 00:13:49.382 fused_ordering(839) 00:13:49.382 fused_ordering(840) 00:13:49.382 fused_ordering(841) 00:13:49.382 fused_ordering(842) 00:13:49.382 fused_ordering(843) 00:13:49.382 fused_ordering(844) 00:13:49.382 fused_ordering(845) 00:13:49.382 fused_ordering(846) 00:13:49.382 fused_ordering(847) 00:13:49.382 fused_ordering(848) 00:13:49.382 fused_ordering(849) 00:13:49.382 fused_ordering(850) 00:13:49.382 fused_ordering(851) 00:13:49.382 fused_ordering(852) 00:13:49.382 fused_ordering(853) 00:13:49.382 fused_ordering(854) 00:13:49.382 fused_ordering(855) 00:13:49.382 fused_ordering(856) 00:13:49.382 fused_ordering(857) 00:13:49.382 fused_ordering(858) 00:13:49.382 fused_ordering(859) 00:13:49.382 fused_ordering(860) 00:13:49.382 fused_ordering(861) 00:13:49.382 fused_ordering(862) 00:13:49.382 fused_ordering(863) 00:13:49.382 fused_ordering(864) 00:13:49.382 fused_ordering(865) 00:13:49.382 fused_ordering(866) 00:13:49.382 fused_ordering(867) 00:13:49.382 fused_ordering(868) 00:13:49.382 fused_ordering(869) 00:13:49.382 fused_ordering(870) 00:13:49.382 fused_ordering(871) 00:13:49.382 fused_ordering(872) 00:13:49.382 fused_ordering(873) 00:13:49.382 fused_ordering(874) 00:13:49.382 fused_ordering(875) 00:13:49.382 fused_ordering(876) 00:13:49.382 fused_ordering(877) 00:13:49.382 fused_ordering(878) 00:13:49.382 fused_ordering(879) 00:13:49.382 fused_ordering(880) 00:13:49.382 fused_ordering(881) 00:13:49.382 fused_ordering(882) 00:13:49.382 fused_ordering(883) 00:13:49.382 fused_ordering(884) 00:13:49.382 fused_ordering(885) 00:13:49.382 fused_ordering(886) 00:13:49.382 fused_ordering(887) 00:13:49.382 fused_ordering(888) 00:13:49.382 fused_ordering(889) 00:13:49.382 fused_ordering(890) 00:13:49.382 fused_ordering(891) 00:13:49.382 fused_ordering(892) 00:13:49.382 fused_ordering(893) 00:13:49.382 fused_ordering(894) 00:13:49.382 fused_ordering(895) 00:13:49.382 fused_ordering(896) 00:13:49.382 fused_ordering(897) 00:13:49.382 fused_ordering(898) 00:13:49.382 fused_ordering(899) 00:13:49.382 fused_ordering(900) 00:13:49.382 fused_ordering(901) 00:13:49.382 fused_ordering(902) 00:13:49.382 fused_ordering(903) 00:13:49.382 fused_ordering(904) 00:13:49.382 fused_ordering(905) 00:13:49.382 fused_ordering(906) 00:13:49.382 
fused_ordering(907) 00:13:49.382 fused_ordering(908) 00:13:49.382 fused_ordering(909) 00:13:49.382 fused_ordering(910) 00:13:49.382 fused_ordering(911) 00:13:49.382 fused_ordering(912) 00:13:49.382 fused_ordering(913) 00:13:49.382 fused_ordering(914) 00:13:49.382 fused_ordering(915) 00:13:49.382 fused_ordering(916) 00:13:49.382 fused_ordering(917) 00:13:49.382 fused_ordering(918) 00:13:49.382 fused_ordering(919) 00:13:49.382 fused_ordering(920) 00:13:49.382 fused_ordering(921) 00:13:49.382 fused_ordering(922) 00:13:49.382 fused_ordering(923) 00:13:49.382 fused_ordering(924) 00:13:49.382 fused_ordering(925) 00:13:49.382 fused_ordering(926) 00:13:49.382 fused_ordering(927) 00:13:49.382 fused_ordering(928) 00:13:49.382 fused_ordering(929) 00:13:49.382 fused_ordering(930) 00:13:49.382 fused_ordering(931) 00:13:49.382 fused_ordering(932) 00:13:49.382 fused_ordering(933) 00:13:49.382 fused_ordering(934) 00:13:49.382 fused_ordering(935) 00:13:49.382 fused_ordering(936) 00:13:49.382 fused_ordering(937) 00:13:49.382 fused_ordering(938) 00:13:49.382 fused_ordering(939) 00:13:49.382 fused_ordering(940) 00:13:49.382 fused_ordering(941) 00:13:49.382 fused_ordering(942) 00:13:49.382 fused_ordering(943) 00:13:49.382 fused_ordering(944) 00:13:49.382 fused_ordering(945) 00:13:49.382 fused_ordering(946) 00:13:49.382 fused_ordering(947) 00:13:49.382 fused_ordering(948) 00:13:49.382 fused_ordering(949) 00:13:49.382 fused_ordering(950) 00:13:49.382 fused_ordering(951) 00:13:49.382 fused_ordering(952) 00:13:49.382 fused_ordering(953) 00:13:49.382 fused_ordering(954) 00:13:49.382 fused_ordering(955) 00:13:49.382 fused_ordering(956) 00:13:49.382 fused_ordering(957) 00:13:49.382 fused_ordering(958) 00:13:49.382 fused_ordering(959) 00:13:49.382 fused_ordering(960) 00:13:49.382 fused_ordering(961) 00:13:49.382 fused_ordering(962) 00:13:49.382 fused_ordering(963) 00:13:49.382 fused_ordering(964) 00:13:49.382 fused_ordering(965) 00:13:49.382 fused_ordering(966) 00:13:49.382 fused_ordering(967) 00:13:49.382 fused_ordering(968) 00:13:49.382 fused_ordering(969) 00:13:49.382 fused_ordering(970) 00:13:49.382 fused_ordering(971) 00:13:49.382 fused_ordering(972) 00:13:49.382 fused_ordering(973) 00:13:49.382 fused_ordering(974) 00:13:49.382 fused_ordering(975) 00:13:49.382 fused_ordering(976) 00:13:49.382 fused_ordering(977) 00:13:49.382 fused_ordering(978) 00:13:49.382 fused_ordering(979) 00:13:49.382 fused_ordering(980) 00:13:49.382 fused_ordering(981) 00:13:49.382 fused_ordering(982) 00:13:49.382 fused_ordering(983) 00:13:49.382 fused_ordering(984) 00:13:49.382 fused_ordering(985) 00:13:49.382 fused_ordering(986) 00:13:49.382 fused_ordering(987) 00:13:49.382 fused_ordering(988) 00:13:49.382 fused_ordering(989) 00:13:49.382 fused_ordering(990) 00:13:49.382 fused_ordering(991) 00:13:49.382 fused_ordering(992) 00:13:49.382 fused_ordering(993) 00:13:49.382 fused_ordering(994) 00:13:49.382 fused_ordering(995) 00:13:49.382 fused_ordering(996) 00:13:49.383 fused_ordering(997) 00:13:49.383 fused_ordering(998) 00:13:49.383 fused_ordering(999) 00:13:49.383 fused_ordering(1000) 00:13:49.383 fused_ordering(1001) 00:13:49.383 fused_ordering(1002) 00:13:49.383 fused_ordering(1003) 00:13:49.383 fused_ordering(1004) 00:13:49.383 fused_ordering(1005) 00:13:49.383 fused_ordering(1006) 00:13:49.383 fused_ordering(1007) 00:13:49.383 fused_ordering(1008) 00:13:49.383 fused_ordering(1009) 00:13:49.383 fused_ordering(1010) 00:13:49.383 fused_ordering(1011) 00:13:49.383 fused_ordering(1012) 00:13:49.383 fused_ordering(1013) 00:13:49.383 
fused_ordering(1014) 00:13:49.383 fused_ordering(1015) 00:13:49.383 fused_ordering(1016) 00:13:49.383 fused_ordering(1017) 00:13:49.383 fused_ordering(1018) 00:13:49.383 fused_ordering(1019) 00:13:49.383 fused_ordering(1020) 00:13:49.383 fused_ordering(1021) 00:13:49.383 fused_ordering(1022) 00:13:49.383 fused_ordering(1023) 00:13:49.383 18:43:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:49.383 18:43:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:49.383 18:43:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:49.383 18:43:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:49.383 18:43:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:49.383 18:43:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:49.383 18:43:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:49.383 18:43:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:49.383 rmmod nvme_tcp 00:13:49.383 rmmod nvme_fabrics 00:13:49.383 rmmod nvme_keyring 00:13:49.641 18:43:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:49.641 18:43:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:49.641 18:43:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:13:49.641 18:43:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1337657 ']' 00:13:49.641 18:43:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1337657 00:13:49.641 18:43:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 1337657 ']' 00:13:49.641 18:43:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 1337657 00:13:49.641 18:43:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:13:49.641 18:43:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:49.641 18:43:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1337657 00:13:49.641 18:43:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:49.641 18:43:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:49.641 18:43:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1337657' 00:13:49.641 killing process with pid 1337657 00:13:49.641 18:43:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 1337657 00:13:49.641 18:43:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 1337657 00:13:49.900 18:43:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:49.900 18:43:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:49.900 18:43:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:49.900 18:43:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:49.900 18:43:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:49.900 18:43:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.900 18:43:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
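Stripped of the xtrace noise, the fused_ordering run above amounts to the following target configuration and host invocation. This is a sketch rather than the harness itself: rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py, and the default RPC socket /var/tmp/spdk.sock is assumed.

    # Target side (nvmf_tgt already running inside cvl_0_0_ns_spdk with -m 0x2):
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512        # 1000 MiB null bdev, 512-byte blocks
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # Host side: connect to the listener and drive the numbered fused-command sequence
    # whose progress is printed as fused_ordering(0) .. fused_ordering(1023) above.
    test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'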
00:13:49.900 18:43:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.804 18:44:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:51.804 00:13:51.804 real 0m10.296s 00:13:51.804 user 0m8.006s 00:13:51.804 sys 0m5.804s 00:13:51.804 18:44:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:51.804 18:44:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:51.804 ************************************ 00:13:51.804 END TEST nvmf_fused_ordering 00:13:51.804 ************************************ 00:13:51.804 18:44:02 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:51.804 18:44:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:51.804 18:44:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:51.804 18:44:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:51.804 ************************************ 00:13:51.804 START TEST nvmf_delete_subsystem 00:13:51.804 ************************************ 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:51.804 * Looking for test storage... 00:13:51.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.804 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:51.805 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:51.805 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:51.805 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.805 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.805 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.805 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:51.805 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:51.805 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:13:51.805 18:44:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.705 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:53.706 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:53.706 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:53.706 18:44:03 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:53.706 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:53.706 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:53.706 18:44:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:53.706 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:53.706 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:53.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:53.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:13:53.965 00:13:53.965 --- 10.0.0.2 ping statistics --- 00:13:53.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.965 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:53.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:53.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:13:53.965 00:13:53.965 --- 10.0.0.1 ping statistics --- 00:13:53.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.965 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1340265 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1340265 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 1340265 ']' 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:53.965 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:53.965 [2024-07-20 18:44:04.190354] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:53.965 [2024-07-20 18:44:04.190436] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.965 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.965 [2024-07-20 18:44:04.257540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:54.223 [2024-07-20 18:44:04.346911] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:54.223 [2024-07-20 18:44:04.346971] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.223 [2024-07-20 18:44:04.347000] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.223 [2024-07-20 18:44:04.347012] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.223 [2024-07-20 18:44:04.347023] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:54.223 [2024-07-20 18:44:04.347078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.223 [2024-07-20 18:44:04.347083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:54.223 [2024-07-20 18:44:04.489669] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:54.223 [2024-07-20 18:44:04.505903] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:54.223 NULL1 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:54.223 Delay0 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1340355 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:54.223 18:44:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:54.481 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.481 [2024-07-20 18:44:04.590712] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
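The entries above are target/delete_subsystem.sh setting up its first scenario: nvmftestinit moved cvl_0_0 into the cvl_0_0_ns_spdk namespace as 10.0.0.2 and left cvl_0_1 in the root namespace as 10.0.0.1, the target was started inside that namespace on cores 0-1, and a null bdev wrapped in a delay bdev (Delay0) was exposed through nqn.2016-06.io.spdk:cnode1 on port 4420. spdk_nvme_perf is then pointed at the subsystem and, after the sleep 2 traced above, the subsystem is deleted underneath it. A condensed sketch of the same RPC sequence, assuming scripts/rpc.py against the default /var/tmp/spdk.sock socket (the script issues these calls through its rpc_cmd wrapper):

  # transport, subsystem and listener -- as traced at delete_subsystem.sh@15-17
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # a null bdev behind a delay bdev, so completions are slow enough to race the delete
  rpc.py bdev_null_create NULL1 1000 512
  rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # issued while spdk_nvme_perf still has up to 128 commands queued (-q 128)
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1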
00:13:56.376 18:44:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:56.376 18:44:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.376 18:44:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 [2024-07-20 18:44:06.641341] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1036ce0 is same with the state(5) to be set 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Write completed with error 
(sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 [2024-07-20 18:44:06.643605] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3824000c00 is same with the state(5) to be set 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 
starting I/O failed: -6 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 [2024-07-20 18:44:06.644620] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f382400bfe0 is same with the state(5) to be set 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 Read completed with error (sct=0, sc=8) 00:13:56.376 starting I/O failed: -6 00:13:56.376 Write completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Write completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 
starting I/O failed: -6 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 Write completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Write completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Write completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 Write completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Write completed with error (sct=0, sc=8) 00:13:56.377 Write completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Write completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 Write completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 Read completed with error (sct=0, sc=8) 00:13:56.377 starting I/O failed: -6 00:13:56.377 [2024-07-20 18:44:06.645244] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f382400c600 is same with the state(5) to be set 00:13:57.308 [2024-07-20 18:44:07.610714] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053620 is same with the state(5) to be set 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Write completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 
00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Write completed with error (sct=0, sc=8) 00:13:57.566 Write completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Write completed with error (sct=0, sc=8) 00:13:57.566 Write completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Write completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Write completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Write completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Write completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 [2024-07-20 18:44:07.637374] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f382400c2f0 is same with the state(5) to be set 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.566 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Write completed with error (sct=0, sc=8) 00:13:57.567 Write completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Write completed with error (sct=0, sc=8) 00:13:57.567 Write completed with error (sct=0, sc=8) 00:13:57.567 Write completed with error (sct=0, sc=8) 00:13:57.567 Write completed with error (sct=0, sc=8) 00:13:57.567 Write completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Write completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Write completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Write completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 [2024-07-20 18:44:07.646419] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1036b00 is same with the state(5) to be set 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 
Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Write completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Write completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Write completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 [2024-07-20 18:44:07.646713] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103bd40 is same with the state(5) to be set 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Write completed with error (sct=0, sc=8) 00:13:57.567 Write completed with error (sct=0, sc=8) 00:13:57.567 Write completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Write completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Write completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Write completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Write completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 Read completed with error (sct=0, sc=8) 00:13:57.567 [2024-07-20 18:44:07.646881] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1036ec0 is same with the state(5) to be set 00:13:57.567 Initializing NVMe Controllers 00:13:57.567 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:57.567 Controller IO queue size 128, less than required. 00:13:57.567 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:57.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:57.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:57.567 Initialization complete. Launching workers. 
00:13:57.567 ======================================================== 00:13:57.567 Latency(us) 00:13:57.567 Device Information : IOPS MiB/s Average min max 00:13:57.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.38 0.08 1000103.34 2385.76 2005537.27 00:13:57.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.38 0.08 908076.95 445.13 1993147.88 00:13:57.567 ======================================================== 00:13:57.567 Total : 326.77 0.16 954090.14 445.13 2005537.27 00:13:57.567 00:13:57.567 [2024-07-20 18:44:07.647689] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1053620 (9): Bad file descriptor 00:13:57.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:57.567 18:44:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.567 18:44:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:13:57.567 18:44:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1340355 00:13:57.567 18:44:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:57.877 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:57.877 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1340355 00:13:57.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1340355) - No such process 00:13:57.877 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1340355 00:13:57.877 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:13:57.877 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1340355 00:13:57.877 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:13:57.877 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:57.877 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:13:57.877 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:57.877 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1340355 00:13:57.877 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:13:57.877 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:57.877 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:57.877 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:57.877 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:57.877 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.877 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:57.877 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.877 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
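The abort storm and the non-zero perf exit above are the expected outcome of the first scenario; the script confirms it by polling the perf process and inverting its exit status, then re-creates the subsystem and listener for the second pass (delete_subsystem.sh@48-49, traced just above). A minimal sketch of that verification pattern, reconstructed from the traced lines; the exact loop layout in the script may differ slightly:

  delay=0
  while kill -0 "$perf_pid"; do          # prints "No such process" once perf has exited
      sleep 0.5
      (( delay++ > 30 )) && exit 1       # bail out instead of hanging forever
  done
  # NOT is the autotest_common.sh helper that inverts an exit status:
  # the delete must have made spdk_nvme_perf finish with an error.
  NOT wait "$perf_pid"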
00:13:57.877 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.877 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:57.877 [2024-07-20 18:44:08.165733] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.877 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.877 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.878 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.878 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:57.878 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.878 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1340816 00:13:57.878 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:13:57.878 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:57.878 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1340816 00:13:57.878 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:58.134 EAL: No free 2048 kB hugepages reported on node 1 00:13:58.134 [2024-07-20 18:44:08.222844] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
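The second scenario, set up just above, runs the same workload against the re-created subsystem but leaves it in place: spdk_nvme_perf gets its full 3 seconds, the polling loop below simply waits for it to finish, and the later plain wait (without NOT) plus the all-success latency summary is the pass condition. The invocation, annotated; the flag meanings are standard spdk_nvme_perf options recalled from the tool rather than spelled out in this log, and backgrounding into perf_pid is implied by the trace:

  # -c 0xC    pin the I/O workers to cores 2 and 3
  # -t 3      run for 3 seconds
  # -q 128    up to 128 outstanding commands
  # -w randrw random mixed workload; -M 70 makes it 70% reads
  # -o 512    512-byte I/Os
  # -P 4      number of I/O queue pairs used by perf (an assumption about this flag)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!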
00:13:58.391 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:58.391 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1340816 00:13:58.391 18:44:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:58.956 18:44:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:58.956 18:44:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1340816 00:13:58.956 18:44:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:59.520 18:44:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:59.520 18:44:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1340816 00:13:59.520 18:44:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:00.086 18:44:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:00.086 18:44:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1340816 00:14:00.086 18:44:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:00.653 18:44:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:00.653 18:44:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1340816 00:14:00.654 18:44:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:00.911 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:00.911 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1340816 00:14:00.911 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:01.168 Initializing NVMe Controllers 00:14:01.168 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:01.168 Controller IO queue size 128, less than required. 00:14:01.168 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:01.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:01.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:01.168 Initialization complete. Launching workers. 
00:14:01.168 ======================================================== 00:14:01.168 Latency(us) 00:14:01.168 Device Information : IOPS MiB/s Average min max 00:14:01.168 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004042.53 1000287.20 1040884.89 00:14:01.168 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005712.77 1000327.46 1043103.73 00:14:01.169 ======================================================== 00:14:01.169 Total : 256.00 0.12 1004877.65 1000287.20 1043103.73 00:14:01.169 00:14:01.426 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:01.426 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1340816 00:14:01.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1340816) - No such process 00:14:01.426 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1340816 00:14:01.426 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:01.426 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:01.426 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:01.426 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:01.426 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:01.426 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:01.426 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:01.426 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:01.426 rmmod nvme_tcp 00:14:01.426 rmmod nvme_fabrics 00:14:01.426 rmmod nvme_keyring 00:14:01.426 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:01.426 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:01.426 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:01.426 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1340265 ']' 00:14:01.426 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1340265 00:14:01.426 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 1340265 ']' 00:14:01.426 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 1340265 00:14:01.684 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:14:01.684 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:01.684 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1340265 00:14:01.684 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:01.684 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:01.684 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1340265' 00:14:01.684 killing process with pid 1340265 00:14:01.684 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 1340265 00:14:01.684 18:44:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 
1340265 00:14:01.942 18:44:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:01.942 18:44:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:01.942 18:44:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:01.942 18:44:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:01.942 18:44:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:01.942 18:44:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.942 18:44:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.942 18:44:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.842 18:44:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:03.842 00:14:03.842 real 0m12.000s 00:14:03.842 user 0m27.341s 00:14:03.842 sys 0m2.876s 00:14:03.842 18:44:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:03.842 18:44:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:03.842 ************************************ 00:14:03.842 END TEST nvmf_delete_subsystem 00:14:03.842 ************************************ 00:14:03.842 18:44:14 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:03.842 18:44:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:03.842 18:44:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:03.842 18:44:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:03.842 ************************************ 00:14:03.842 START TEST nvmf_ns_masking 00:14:03.842 ************************************ 00:14:03.842 18:44:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:03.842 * Looking for test storage... 
00:14:03.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:03.842 18:44:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:03.842 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:03.842 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:03.842 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:03.842 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:03.842 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:03.842 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:03.842 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:03.842 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:03.842 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:03.842 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.842 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.842 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:03.842 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:03.842 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.842 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.842 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=9ef08e92-1c82-4644-b016-4ef6c0dd33ec 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:04.100 18:44:14 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:04.100 18:44:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:05.996 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:05.996 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:05.996 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
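The trace above is the harness enumerating supported NVMe-oF NICs: it matches the cached PCI IDs (Intel E810 0x159b here) and then resolves each PCI function to its kernel net device through sysfs. A minimal sketch of that discovery step, assuming the same sysfs layout; the PCI address is the one reported in the trace, everything else is illustrative:

  # Sketch only: resolve a PCI function to its kernel net device the same way the
  # traced pci_net_devs assignment does, then strip the sysfs path.
  pci=0000:0a:00.0                                    # address taken from the trace above
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # one glob hit per netdev on the port
  pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the device name, e.g. cvl_0_0
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
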
00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:05.996 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:05.996 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:06.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:14:06.253 00:14:06.253 --- 10.0.0.2 ping statistics --- 00:14:06.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.253 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:06.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:06.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:14:06.253 00:14:06.253 --- 10.0.0.1 ping statistics --- 00:14:06.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.253 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1343156 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1343156 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 1343156 ']' 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:06.253 18:44:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:06.253 [2024-07-20 18:44:16.521380] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
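At this point the harness has launched the SPDK target inside the cvl_0_0_ns_spdk namespace and is waiting for its RPC socket. A rough sketch of that start-up step, under the assumption that polling rpc_get_methods on the default /var/tmp/spdk.sock socket is an acceptable stand-in for the harness's waitforlisten helper:

  # Sketch, not the harness's exact logic: start nvmf_tgt in the target namespace
  # (the command is the one traced above) and poll the RPC socket until it answers.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5    # the Unix socket lives on the filesystem, so no netns exec is needed here
  done
  echo "nvmf_tgt (pid $nvmfpid) is up"
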
00:14:06.253 [2024-07-20 18:44:16.521454] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.253 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.509 [2024-07-20 18:44:16.587429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:06.509 [2024-07-20 18:44:16.673944] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.509 [2024-07-20 18:44:16.674000] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.509 [2024-07-20 18:44:16.674030] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.509 [2024-07-20 18:44:16.674042] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.509 [2024-07-20 18:44:16.674052] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.509 [2024-07-20 18:44:16.674182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.509 [2024-07-20 18:44:16.674249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.509 [2024-07-20 18:44:16.674299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.509 [2024-07-20 18:44:16.674302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.509 18:44:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:06.509 18:44:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:14:06.509 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:06.509 18:44:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:06.509 18:44:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:06.509 18:44:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.509 18:44:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:06.765 [2024-07-20 18:44:17.029049] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.765 18:44:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:06.765 18:44:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:06.765 18:44:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:07.021 Malloc1 00:14:07.021 18:44:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:07.278 Malloc2 00:14:07.278 18:44:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:07.534 18:44:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:07.789 18:44:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.045 [2024-07-20 18:44:18.304617] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.045 18:44:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:14:08.045 18:44:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9ef08e92-1c82-4644-b016-4ef6c0dd33ec -a 10.0.0.2 -s 4420 -i 4 00:14:08.302 18:44:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:14:08.302 18:44:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:08.302 18:44:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:08.302 18:44:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:08.302 18:44:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:10.194 18:44:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:10.194 18:44:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:10.194 18:44:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:10.194 18:44:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:10.194 18:44:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:10.194 18:44:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:10.194 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:10.194 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:10.194 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:10.194 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:10.194 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:10.194 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:10.194 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:10.194 [ 0]:0x1 00:14:10.194 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:10.194 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:10.450 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d10bf17b51e944e38f73040682bb5305 00:14:10.450 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d10bf17b51e944e38f73040682bb5305 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:10.450 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:10.723 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:14:10.723 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:10.723 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
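The ns_is_visible checks in this trace boil down to two nvme-cli calls against the connected controller. A condensed sketch of that probe (device name and nsid as in the trace; treating an all-zero NGUID as "masked" is exactly what the [[ ... != 0...0 ]] comparison does):

  # Sketch of the visibility probe: a namespace counts as visible when it is listed
  # and its NGUID is not the all-zero placeholder.
  nvme list-ns /dev/nvme0 | grep 0x1
  nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
  if [[ $nguid != 00000000000000000000000000000000 ]]; then
      echo "nsid 1 is visible (nguid=$nguid)"
  fi
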
00:14:10.723 [ 0]:0x1 00:14:10.723 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:10.723 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:10.723 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d10bf17b51e944e38f73040682bb5305 00:14:10.723 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d10bf17b51e944e38f73040682bb5305 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:10.723 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:14:10.723 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:10.723 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:10.723 [ 1]:0x2 00:14:10.723 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:10.723 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:10.723 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1abbd43e35bc4797818999e20f595a77 00:14:10.723 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1abbd43e35bc4797818999e20f595a77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:10.723 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:14:10.723 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:10.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.723 18:44:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.980 18:44:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:11.236 18:44:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:14:11.236 18:44:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9ef08e92-1c82-4644-b016-4ef6c0dd33ec -a 10.0.0.2 -s 4420 -i 4 00:14:11.493 18:44:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:11.493 18:44:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:11.493 18:44:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:11.493 18:44:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:14:11.493 18:44:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:14:11.493 18:44:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:13.383 18:44:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:13.383 18:44:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:13.383 18:44:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:13.383 18:44:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:13.383 18:44:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == 
nvme_device_counter )) 00:14:13.383 18:44:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:13.383 18:44:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:13.383 18:44:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:13.383 18:44:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:13.383 18:44:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:13.383 18:44:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:14:13.383 18:44:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:13.383 18:44:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:13.383 18:44:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:13.383 18:44:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:13.383 18:44:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:13.383 18:44:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:13.383 18:44:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:13.383 18:44:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:13.383 18:44:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:13.383 18:44:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:13.383 18:44:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:13.640 18:44:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:13.640 18:44:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:13.640 18:44:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:13.640 18:44:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:13.640 18:44:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:13.640 18:44:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:13.640 18:44:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:14:13.640 18:44:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:13.640 18:44:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:13.640 [ 0]:0x2 00:14:13.640 18:44:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:13.640 18:44:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:13.640 18:44:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1abbd43e35bc4797818999e20f595a77 00:14:13.640 18:44:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1abbd43e35bc4797818999e20f595a77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:13.640 18:44:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:14:13.896 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:14:13.896 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:13.896 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:13.896 [ 0]:0x1 00:14:13.896 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:13.896 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:13.896 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d10bf17b51e944e38f73040682bb5305 00:14:13.896 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d10bf17b51e944e38f73040682bb5305 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:13.896 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:14:13.896 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:13.896 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:13.896 [ 1]:0x2 00:14:13.896 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:13.896 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:13.896 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1abbd43e35bc4797818999e20f595a77 00:14:13.896 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1abbd43e35bc4797818999e20f595a77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:13.896 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:14.153 
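The NOT wrapper used above (from autotest_common.sh) inverts the exit status of the command it runs, so the test passes only when ns_is_visible fails, i.e. when the namespace really is hidden from this host. A rough, simplified equivalent; the real helper also validates its argument and distinguishes crash-level exit codes, as the valid_exec_arg and (( es > 128 )) lines in the trace show:

  # Simplified stand-in for the NOT helper: succeed only if the wrapped command fails.
  not() { ! "$@"; }
  not ns_is_visible 0x1 && echo "nsid 1 is masked for nqn.2016-06.io.spdk:host1, as expected"
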
18:44:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:14.153 [ 0]:0x2 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1abbd43e35bc4797818999e20f595a77 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1abbd43e35bc4797818999e20f595a77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:14.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.153 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:14.410 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:14:14.410 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9ef08e92-1c82-4644-b016-4ef6c0dd33ec -a 10.0.0.2 -s 4420 -i 4 00:14:14.667 18:44:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:14.667 18:44:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:14.667 18:44:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:14.667 18:44:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:14.667 18:44:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:14.667 18:44:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:16.568 18:44:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:16.568 18:44:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:16.568 18:44:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:16.568 18:44:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:16.568 18:44:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:16.568 18:44:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:16.846 18:44:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:16.846 18:44:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:16.846 18:44:26 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:16.846 18:44:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:16.846 18:44:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:14:16.846 18:44:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:16.846 18:44:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:16.846 [ 0]:0x1 00:14:16.846 18:44:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:16.846 18:44:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:16.846 18:44:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d10bf17b51e944e38f73040682bb5305 00:14:16.846 18:44:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d10bf17b51e944e38f73040682bb5305 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:16.846 18:44:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:14:16.846 18:44:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:16.846 18:44:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:16.846 [ 1]:0x2 00:14:16.846 18:44:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:16.846 18:44:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:16.846 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1abbd43e35bc4797818999e20f595a77 00:14:16.846 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1abbd43e35bc4797818999e20f595a77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:16.846 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:17.104 [ 0]:0x2 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1abbd43e35bc4797818999e20f595a77 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1abbd43e35bc4797818999e20f595a77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:17.104 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:17.361 [2024-07-20 18:44:27.558112] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:17.361 request: 00:14:17.361 { 00:14:17.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:17.361 "nsid": 2, 00:14:17.361 "host": "nqn.2016-06.io.spdk:host1", 00:14:17.361 "method": 
"nvmf_ns_remove_host", 00:14:17.361 "req_id": 1 00:14:17.361 } 00:14:17.361 Got JSON-RPC error response 00:14:17.361 response: 00:14:17.361 { 00:14:17.361 "code": -32602, 00:14:17.361 "message": "Invalid parameters" 00:14:17.361 } 00:14:17.361 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:17.361 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:17.362 [ 0]:0x2 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1abbd43e35bc4797818999e20f595a77 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1abbd43e35bc4797818999e20f595a77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:14:17.362 18:44:27 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:17.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.620 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:17.620 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:17.620 18:44:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:14:17.620 18:44:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:17.620 18:44:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:17.620 18:44:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:17.620 18:44:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:17.620 18:44:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:17.620 18:44:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:17.878 rmmod nvme_tcp 00:14:17.878 rmmod nvme_fabrics 00:14:17.878 rmmod nvme_keyring 00:14:17.878 18:44:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:17.878 18:44:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:17.878 18:44:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:17.878 18:44:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1343156 ']' 00:14:17.878 18:44:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1343156 00:14:17.878 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 1343156 ']' 00:14:17.878 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 1343156 00:14:17.878 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:14:17.878 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:17.878 18:44:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1343156 00:14:17.878 18:44:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:17.878 18:44:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:17.878 18:44:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1343156' 00:14:17.878 killing process with pid 1343156 00:14:17.878 18:44:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 1343156 00:14:17.878 18:44:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 1343156 00:14:18.187 18:44:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:18.187 18:44:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:18.187 18:44:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:18.187 18:44:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:18.187 18:44:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:18.187 18:44:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.187 18:44:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:18.187 18:44:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.082 
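For reference, the masking flow this test has just finished exercising reduces to a handful of RPCs. This is a condensed recap of the sequence traced above, not an exact replay of ns_masking.sh; rpc.py is shown without its full workspace path, and the NQNs and address are the ones from the trace:

  rpc.py nvmf_create_subsystem       nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns       nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  rpc.py nvmf_ns_add_host            nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # unmask nsid 1
  rpc.py nvmf_ns_remove_host         nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # mask it again
  rpc.py nvmf_delete_subsystem       nqn.2016-06.io.spdk:cnode1
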
18:44:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:20.083 00:14:20.083 real 0m16.201s 00:14:20.083 user 0m49.515s 00:14:20.083 sys 0m3.749s 00:14:20.083 18:44:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:20.083 18:44:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:20.083 ************************************ 00:14:20.083 END TEST nvmf_ns_masking 00:14:20.083 ************************************ 00:14:20.083 18:44:30 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:20.083 18:44:30 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:20.083 18:44:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:20.083 18:44:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:20.083 18:44:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:20.083 ************************************ 00:14:20.083 START TEST nvmf_nvme_cli 00:14:20.083 ************************************ 00:14:20.083 18:44:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:20.083 * Looking for test storage... 00:14:20.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:20.340 18:44:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:22.240 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:22.241 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:22.241 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:22.241 18:44:32 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:22.241 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:22.241 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:22.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:22.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:14:22.241 00:14:22.241 --- 10.0.0.2 ping statistics --- 00:14:22.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.241 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:22.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:22.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:14:22.241 00:14:22.241 --- 10.0.0.1 ping statistics --- 00:14:22.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.241 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1346581 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1346581 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 1346581 ']' 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
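For anyone rebuilding this topology by hand, the networking steps the harness executed above condense to the sequence below. This is a sketch assembled only from the commands visible in this run: the cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace and the 10.0.0.x addresses are specific to this machine and would need adjusting elsewhere.

#!/usr/bin/env bash
# Sketch: recreate the NVMe/TCP test topology used above (requires root).
# cvl_0_0 becomes the target-side port inside a namespace; cvl_0_1 stays on the host as the initiator.
set -euo pipefail
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                        # namespace that will host nvmf_tgt
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP toward the initiator port
ping -c 1 10.0.0.2                                                  # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability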
00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:22.241 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.499 [2024-07-20 18:44:32.593703] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:22.499 [2024-07-20 18:44:32.593799] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.499 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.499 [2024-07-20 18:44:32.660765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:22.499 [2024-07-20 18:44:32.747536] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.499 [2024-07-20 18:44:32.747602] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.499 [2024-07-20 18:44:32.747630] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.499 [2024-07-20 18:44:32.747642] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.499 [2024-07-20 18:44:32.747652] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:22.499 [2024-07-20 18:44:32.747732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.499 [2024-07-20 18:44:32.747806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.499 [2024-07-20 18:44:32.747844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:22.499 [2024-07-20 18:44:32.747847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.757 [2024-07-20 18:44:32.890549] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.757 Malloc0 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.757 Malloc1 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.757 [2024-07-20 18:44:32.974548] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.757 18:44:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:14:23.014 00:14:23.014 Discovery Log Number of Records 2, Generation counter 2 00:14:23.014 =====Discovery Log Entry 0====== 00:14:23.014 trtype: tcp 00:14:23.014 adrfam: ipv4 00:14:23.014 subtype: current discovery subsystem 00:14:23.014 treq: not required 00:14:23.014 portid: 0 00:14:23.014 trsvcid: 4420 00:14:23.014 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:23.014 traddr: 10.0.0.2 00:14:23.014 eflags: explicit discovery connections, duplicate discovery information 00:14:23.014 sectype: none 00:14:23.014 =====Discovery Log Entry 1====== 00:14:23.014 trtype: tcp 00:14:23.014 adrfam: ipv4 00:14:23.014 subtype: nvme subsystem 00:14:23.014 treq: not required 00:14:23.014 portid: 0 00:14:23.014 trsvcid: 
4420 00:14:23.014 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:23.014 traddr: 10.0.0.2 00:14:23.014 eflags: none 00:14:23.014 sectype: none 00:14:23.014 18:44:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:23.014 18:44:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:23.014 18:44:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:23.014 18:44:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:23.014 18:44:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:23.014 18:44:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:23.014 18:44:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:23.014 18:44:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:23.014 18:44:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:23.014 18:44:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:23.014 18:44:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:23.577 18:44:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:23.577 18:44:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:14:23.577 18:44:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:23.577 18:44:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:23.577 18:44:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:23.577 18:44:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:14:25.471 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:25.472 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:25.472 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:25.472 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:25.472 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:25.472 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:14:25.472 18:44:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:25.472 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:25.472 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:25.472 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:25.730 18:44:35 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:25.730 /dev/nvme0n1 ]] 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:25.730 18:44:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:25.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:25.731 18:44:35 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:25.731 rmmod nvme_tcp 00:14:25.731 rmmod nvme_fabrics 00:14:25.731 rmmod nvme_keyring 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1346581 ']' 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1346581 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 1346581 ']' 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 1346581 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1346581 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1346581' 00:14:25.731 killing process with pid 1346581 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 1346581 00:14:25.731 18:44:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 1346581 00:14:25.989 18:44:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:25.989 18:44:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:25.989 18:44:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:25.989 18:44:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:25.989 18:44:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:25.989 18:44:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.989 18:44:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:25.989 18:44:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.526 18:44:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:28.526 00:14:28.526 real 0m7.972s 00:14:28.526 user 0m14.460s 00:14:28.526 sys 0m2.132s 00:14:28.526 18:44:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:28.526 18:44:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.526 ************************************ 00:14:28.526 END TEST nvmf_nvme_cli 00:14:28.526 ************************************ 00:14:28.526 18:44:38 nvmf_tcp -- 
nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:14:28.526 18:44:38 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:28.526 18:44:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:28.526 18:44:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:28.526 18:44:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:28.526 ************************************ 00:14:28.526 START TEST nvmf_vfio_user 00:14:28.526 ************************************ 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:28.526 * Looking for test storage... 00:14:28.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:28.526 
18:44:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1347380 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1347380' 00:14:28.526 Process pid: 1347380 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1347380 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 1347380 ']' 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:28.526 [2024-07-20 18:44:38.488804] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:28.526 [2024-07-20 18:44:38.488905] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.526 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.526 [2024-07-20 18:44:38.560511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:28.526 [2024-07-20 18:44:38.658109] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.526 [2024-07-20 18:44:38.658171] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.526 [2024-07-20 18:44:38.658200] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:28.526 [2024-07-20 18:44:38.658212] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:28.526 [2024-07-20 18:44:38.658222] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
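Before the per-device provisioning that appears a few entries below, it may help to see that RPC sequence in one place. The sketch is assembled from the rpc.py calls recorded in this run for the first vfio-user device; the second device repeats the same pattern with Malloc2, cnode2 and /var/run/vfio-user/domain/vfio-user2/2. The rpc.py path is the one used in this workspace, and the default /var/tmp/spdk.sock RPC socket is assumed.

#!/usr/bin/env bash
# Sketch: provision one vfio-user controller on a running nvmf_tgt, mirroring the calls logged below.
set -euo pipefail
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t VFIOUSER                           # enable the vfio-user transport
mkdir -p /var/run/vfio-user/domain/vfio-user1/1                  # directory backing the vfio-user endpoint
$RPC bdev_malloc_create 64 512 -b Malloc1                        # 64 MiB RAM bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
    -a /var/run/vfio-user/domain/vfio-user1/1 -s 0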
00:14:28.526 [2024-07-20 18:44:38.658290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.526 [2024-07-20 18:44:38.661819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.526 [2024-07-20 18:44:38.661865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.526 [2024-07-20 18:44:38.661860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:14:28.526 18:44:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:29.895 18:44:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:29.895 18:44:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:29.895 18:44:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:29.895 18:44:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:29.895 18:44:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:29.895 18:44:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:30.152 Malloc1 00:14:30.152 18:44:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:30.409 18:44:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:30.666 18:44:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:30.923 18:44:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:30.923 18:44:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:30.923 18:44:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:31.181 Malloc2 00:14:31.182 18:44:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:31.439 18:44:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:31.696 18:44:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:31.955 18:44:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:31.955 18:44:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:31.955 18:44:42 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:31.955 18:44:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:31.955 18:44:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:31.955 18:44:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:31.955 [2024-07-20 18:44:42.099558] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:31.955 [2024-07-20 18:44:42.099600] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1347916 ] 00:14:31.955 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.955 [2024-07-20 18:44:42.135156] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:31.955 [2024-07-20 18:44:42.143310] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:31.955 [2024-07-20 18:44:42.143338] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f4e56ea6000 00:14:31.955 [2024-07-20 18:44:42.144308] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:31.955 [2024-07-20 18:44:42.145297] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:31.955 [2024-07-20 18:44:42.146302] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:31.955 [2024-07-20 18:44:42.147307] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:31.955 [2024-07-20 18:44:42.148309] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:31.955 [2024-07-20 18:44:42.149314] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:31.955 [2024-07-20 18:44:42.150319] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:31.955 [2024-07-20 18:44:42.151328] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:31.955 [2024-07-20 18:44:42.152337] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:31.955 [2024-07-20 18:44:42.152357] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4e55c58000 00:14:31.955 [2024-07-20 18:44:42.153481] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:31.955 [2024-07-20 18:44:42.169439] vfio_user_pci.c: 386:spdk_vfio_user_setup: 
*DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:31.955 [2024-07-20 18:44:42.169476] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:31.955 [2024-07-20 18:44:42.174446] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:31.956 [2024-07-20 18:44:42.174504] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:31.956 [2024-07-20 18:44:42.174597] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:31.956 [2024-07-20 18:44:42.174635] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:31.956 [2024-07-20 18:44:42.174645] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:31.956 [2024-07-20 18:44:42.175449] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:31.956 [2024-07-20 18:44:42.175472] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:31.956 [2024-07-20 18:44:42.175486] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:31.956 [2024-07-20 18:44:42.176452] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:31.956 [2024-07-20 18:44:42.176469] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:31.956 [2024-07-20 18:44:42.176483] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:31.956 [2024-07-20 18:44:42.177458] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:31.956 [2024-07-20 18:44:42.177476] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:31.956 [2024-07-20 18:44:42.178461] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:31.956 [2024-07-20 18:44:42.178480] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:31.956 [2024-07-20 18:44:42.178489] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:31.956 [2024-07-20 18:44:42.178501] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:31.956 [2024-07-20 18:44:42.178610] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:31.956 [2024-07-20 18:44:42.178618] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:31.956 [2024-07-20 18:44:42.178627] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:31.956 [2024-07-20 18:44:42.179474] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:31.956 [2024-07-20 18:44:42.180472] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:31.956 [2024-07-20 18:44:42.181480] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:31.956 [2024-07-20 18:44:42.182478] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:31.956 [2024-07-20 18:44:42.182585] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:31.956 [2024-07-20 18:44:42.183493] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:31.956 [2024-07-20 18:44:42.183511] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:31.956 [2024-07-20 18:44:42.183520] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:31.956 [2024-07-20 18:44:42.183544] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:31.956 [2024-07-20 18:44:42.183557] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:31.956 [2024-07-20 18:44:42.183584] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:31.956 [2024-07-20 18:44:42.183593] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:31.956 [2024-07-20 18:44:42.183611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:31.956 [2024-07-20 18:44:42.183669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:31.956 [2024-07-20 18:44:42.183689] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:31.956 [2024-07-20 18:44:42.183698] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:31.956 [2024-07-20 18:44:42.183707] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:31.956 [2024-07-20 18:44:42.183714] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:31.956 [2024-07-20 18:44:42.183722] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:31.956 [2024-07-20 18:44:42.183730] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:31.956 [2024-07-20 18:44:42.183738] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:31.956 [2024-07-20 18:44:42.183750] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:31.956 [2024-07-20 18:44:42.183765] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:31.956 [2024-07-20 18:44:42.183798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:31.956 [2024-07-20 18:44:42.183818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.956 [2024-07-20 18:44:42.183831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.956 [2024-07-20 18:44:42.183860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.956 [2024-07-20 18:44:42.183873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.956 [2024-07-20 18:44:42.183882] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:31.956 [2024-07-20 18:44:42.183898] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:31.956 [2024-07-20 18:44:42.183919] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:31.956 [2024-07-20 18:44:42.183933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:31.956 [2024-07-20 18:44:42.183944] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:31.956 [2024-07-20 18:44:42.183953] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:31.956 [2024-07-20 18:44:42.183965] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:31.956 [2024-07-20 18:44:42.183979] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:31.956 [2024-07-20 18:44:42.183993] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:31.956 [2024-07-20 18:44:42.184005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:31.956 [2024-07-20 18:44:42.184087] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:31.956 [2024-07-20 18:44:42.184104] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:31.956 [2024-07-20 18:44:42.184117] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:31.956 [2024-07-20 18:44:42.184126] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:31.956 [2024-07-20 18:44:42.184136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:31.956 [2024-07-20 18:44:42.184167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:31.956 [2024-07-20 18:44:42.184183] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:31.956 [2024-07-20 18:44:42.184198] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:31.956 [2024-07-20 18:44:42.184211] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:31.956 [2024-07-20 18:44:42.184223] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:31.957 [2024-07-20 18:44:42.184231] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:31.957 [2024-07-20 18:44:42.184241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:31.957 [2024-07-20 18:44:42.184261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:31.957 [2024-07-20 18:44:42.184281] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:31.957 [2024-07-20 18:44:42.184295] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:31.957 [2024-07-20 18:44:42.184306] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:31.957 [2024-07-20 18:44:42.184314] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:31.957 [2024-07-20 18:44:42.184323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:31.957 [2024-07-20 18:44:42.184338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:31.957 [2024-07-20 18:44:42.184353] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:31.957 [2024-07-20 18:44:42.184364] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:14:31.957 [2024-07-20 18:44:42.184377] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:31.957 [2024-07-20 18:44:42.184387] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:31.957 [2024-07-20 18:44:42.184396] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:31.957 [2024-07-20 18:44:42.184404] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:31.957 [2024-07-20 18:44:42.184412] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:31.957 [2024-07-20 18:44:42.184420] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:31.957 [2024-07-20 18:44:42.184449] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:31.957 [2024-07-20 18:44:42.184467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:31.957 [2024-07-20 18:44:42.184485] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:31.957 [2024-07-20 18:44:42.184497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:31.957 [2024-07-20 18:44:42.184512] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:31.957 [2024-07-20 18:44:42.184527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:31.957 [2024-07-20 18:44:42.184543] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:31.957 [2024-07-20 18:44:42.184554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:31.957 [2024-07-20 18:44:42.184571] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:31.957 [2024-07-20 18:44:42.184580] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:31.957 [2024-07-20 18:44:42.184586] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:31.957 [2024-07-20 18:44:42.184592] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:31.957 [2024-07-20 18:44:42.184601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:31.957 [2024-07-20 18:44:42.184612] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:31.957 [2024-07-20 18:44:42.184620] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:31.957 [2024-07-20 18:44:42.184629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:31.957 [2024-07-20 18:44:42.184640] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:31.957 [2024-07-20 18:44:42.184653] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:31.957 [2024-07-20 18:44:42.184662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:31.957 [2024-07-20 18:44:42.184674] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:31.957 [2024-07-20 18:44:42.184683] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:31.957 [2024-07-20 18:44:42.184691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:31.957 [2024-07-20 18:44:42.184702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:31.957 [2024-07-20 18:44:42.184722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:31.957 [2024-07-20 18:44:42.184737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:31.957 [2024-07-20 18:44:42.184752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:31.957 ===================================================== 00:14:31.957 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:31.957 ===================================================== 00:14:31.957 Controller Capabilities/Features 00:14:31.957 ================================ 00:14:31.957 Vendor ID: 4e58 00:14:31.957 Subsystem Vendor ID: 4e58 00:14:31.957 Serial Number: SPDK1 00:14:31.957 Model Number: SPDK bdev Controller 00:14:31.957 Firmware Version: 24.05.1 00:14:31.957 Recommended Arb Burst: 6 00:14:31.957 IEEE OUI Identifier: 8d 6b 50 00:14:31.957 Multi-path I/O 00:14:31.957 May have multiple subsystem ports: Yes 00:14:31.957 May have multiple controllers: Yes 00:14:31.957 Associated with SR-IOV VF: No 00:14:31.957 Max Data Transfer Size: 131072 00:14:31.957 Max Number of Namespaces: 32 00:14:31.957 Max Number of I/O Queues: 127 00:14:31.957 NVMe Specification Version (VS): 1.3 00:14:31.957 NVMe Specification Version (Identify): 1.3 00:14:31.957 Maximum Queue Entries: 256 00:14:31.957 Contiguous Queues Required: Yes 00:14:31.957 Arbitration Mechanisms Supported 00:14:31.957 Weighted Round Robin: Not Supported 00:14:31.957 Vendor Specific: Not Supported 00:14:31.957 Reset Timeout: 15000 ms 00:14:31.957 Doorbell Stride: 4 bytes 00:14:31.957 NVM Subsystem Reset: Not Supported 00:14:31.957 Command Sets Supported 00:14:31.957 NVM Command Set: Supported 00:14:31.957 Boot Partition: Not Supported 00:14:31.957 Memory Page Size Minimum: 4096 bytes 00:14:31.957 Memory Page Size Maximum: 4096 bytes 00:14:31.957 Persistent Memory Region: Not Supported 00:14:31.957 Optional Asynchronous Events Supported 00:14:31.957 Namespace Attribute Notices: Supported 00:14:31.957 Firmware Activation Notices: Not Supported 00:14:31.957 ANA Change Notices: Not Supported 
00:14:31.957 PLE Aggregate Log Change Notices: Not Supported 00:14:31.957 LBA Status Info Alert Notices: Not Supported 00:14:31.957 EGE Aggregate Log Change Notices: Not Supported 00:14:31.957 Normal NVM Subsystem Shutdown event: Not Supported 00:14:31.957 Zone Descriptor Change Notices: Not Supported 00:14:31.957 Discovery Log Change Notices: Not Supported 00:14:31.957 Controller Attributes 00:14:31.957 128-bit Host Identifier: Supported 00:14:31.957 Non-Operational Permissive Mode: Not Supported 00:14:31.957 NVM Sets: Not Supported 00:14:31.957 Read Recovery Levels: Not Supported 00:14:31.957 Endurance Groups: Not Supported 00:14:31.957 Predictable Latency Mode: Not Supported 00:14:31.957 Traffic Based Keep ALive: Not Supported 00:14:31.957 Namespace Granularity: Not Supported 00:14:31.957 SQ Associations: Not Supported 00:14:31.957 UUID List: Not Supported 00:14:31.957 Multi-Domain Subsystem: Not Supported 00:14:31.957 Fixed Capacity Management: Not Supported 00:14:31.958 Variable Capacity Management: Not Supported 00:14:31.958 Delete Endurance Group: Not Supported 00:14:31.958 Delete NVM Set: Not Supported 00:14:31.958 Extended LBA Formats Supported: Not Supported 00:14:31.958 Flexible Data Placement Supported: Not Supported 00:14:31.958 00:14:31.958 Controller Memory Buffer Support 00:14:31.958 ================================ 00:14:31.958 Supported: No 00:14:31.958 00:14:31.958 Persistent Memory Region Support 00:14:31.958 ================================ 00:14:31.958 Supported: No 00:14:31.958 00:14:31.958 Admin Command Set Attributes 00:14:31.958 ============================ 00:14:31.958 Security Send/Receive: Not Supported 00:14:31.958 Format NVM: Not Supported 00:14:31.958 Firmware Activate/Download: Not Supported 00:14:31.958 Namespace Management: Not Supported 00:14:31.958 Device Self-Test: Not Supported 00:14:31.958 Directives: Not Supported 00:14:31.958 NVMe-MI: Not Supported 00:14:31.958 Virtualization Management: Not Supported 00:14:31.958 Doorbell Buffer Config: Not Supported 00:14:31.958 Get LBA Status Capability: Not Supported 00:14:31.958 Command & Feature Lockdown Capability: Not Supported 00:14:31.958 Abort Command Limit: 4 00:14:31.958 Async Event Request Limit: 4 00:14:31.958 Number of Firmware Slots: N/A 00:14:31.958 Firmware Slot 1 Read-Only: N/A 00:14:31.958 Firmware Activation Without Reset: N/A 00:14:31.958 Multiple Update Detection Support: N/A 00:14:31.958 Firmware Update Granularity: No Information Provided 00:14:31.958 Per-Namespace SMART Log: No 00:14:31.958 Asymmetric Namespace Access Log Page: Not Supported 00:14:31.958 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:31.958 Command Effects Log Page: Supported 00:14:31.958 Get Log Page Extended Data: Supported 00:14:31.958 Telemetry Log Pages: Not Supported 00:14:31.958 Persistent Event Log Pages: Not Supported 00:14:31.958 Supported Log Pages Log Page: May Support 00:14:31.958 Commands Supported & Effects Log Page: Not Supported 00:14:31.958 Feature Identifiers & Effects Log Page:May Support 00:14:31.958 NVMe-MI Commands & Effects Log Page: May Support 00:14:31.958 Data Area 4 for Telemetry Log: Not Supported 00:14:31.958 Error Log Page Entries Supported: 128 00:14:31.958 Keep Alive: Supported 00:14:31.958 Keep Alive Granularity: 10000 ms 00:14:31.958 00:14:31.958 NVM Command Set Attributes 00:14:31.958 ========================== 00:14:31.958 Submission Queue Entry Size 00:14:31.958 Max: 64 00:14:31.958 Min: 64 00:14:31.958 Completion Queue Entry Size 00:14:31.958 Max: 16 00:14:31.958 Min: 16 
00:14:31.958 Number of Namespaces: 32 00:14:31.958 Compare Command: Supported 00:14:31.958 Write Uncorrectable Command: Not Supported 00:14:31.958 Dataset Management Command: Supported 00:14:31.958 Write Zeroes Command: Supported 00:14:31.958 Set Features Save Field: Not Supported 00:14:31.958 Reservations: Not Supported 00:14:31.958 Timestamp: Not Supported 00:14:31.958 Copy: Supported 00:14:31.958 Volatile Write Cache: Present 00:14:31.958 Atomic Write Unit (Normal): 1 00:14:31.958 Atomic Write Unit (PFail): 1 00:14:31.958 Atomic Compare & Write Unit: 1 00:14:31.958 Fused Compare & Write: Supported 00:14:31.958 Scatter-Gather List 00:14:31.958 SGL Command Set: Supported (Dword aligned) 00:14:31.958 SGL Keyed: Not Supported 00:14:31.958 SGL Bit Bucket Descriptor: Not Supported 00:14:31.958 SGL Metadata Pointer: Not Supported 00:14:31.958 Oversized SGL: Not Supported 00:14:31.958 SGL Metadata Address: Not Supported 00:14:31.958 SGL Offset: Not Supported 00:14:31.958 Transport SGL Data Block: Not Supported 00:14:31.958 Replay Protected Memory Block: Not Supported 00:14:31.958 00:14:31.958 Firmware Slot Information 00:14:31.958 ========================= 00:14:31.958 Active slot: 1 00:14:31.958 Slot 1 Firmware Revision: 24.05.1 00:14:31.958 00:14:31.958 00:14:31.958 Commands Supported and Effects 00:14:31.958 ============================== 00:14:31.958 Admin Commands 00:14:31.958 -------------- 00:14:31.958 Get Log Page (02h): Supported 00:14:31.958 Identify (06h): Supported 00:14:31.958 Abort (08h): Supported 00:14:31.958 Set Features (09h): Supported 00:14:31.958 Get Features (0Ah): Supported 00:14:31.958 Asynchronous Event Request (0Ch): Supported 00:14:31.958 Keep Alive (18h): Supported 00:14:31.958 I/O Commands 00:14:31.958 ------------ 00:14:31.958 Flush (00h): Supported LBA-Change 00:14:31.958 Write (01h): Supported LBA-Change 00:14:31.958 Read (02h): Supported 00:14:31.958 Compare (05h): Supported 00:14:31.958 Write Zeroes (08h): Supported LBA-Change 00:14:31.958 Dataset Management (09h): Supported LBA-Change 00:14:31.958 Copy (19h): Supported LBA-Change 00:14:31.958 Unknown (79h): Supported LBA-Change 00:14:31.958 Unknown (7Ah): Supported 00:14:31.958 00:14:31.958 Error Log 00:14:31.958 ========= 00:14:31.958 00:14:31.958 Arbitration 00:14:31.958 =========== 00:14:31.958 Arbitration Burst: 1 00:14:31.958 00:14:31.958 Power Management 00:14:31.958 ================ 00:14:31.958 Number of Power States: 1 00:14:31.958 Current Power State: Power State #0 00:14:31.958 Power State #0: 00:14:31.958 Max Power: 0.00 W 00:14:31.958 Non-Operational State: Operational 00:14:31.958 Entry Latency: Not Reported 00:14:31.958 Exit Latency: Not Reported 00:14:31.958 Relative Read Throughput: 0 00:14:31.958 Relative Read Latency: 0 00:14:31.958 Relative Write Throughput: 0 00:14:31.958 Relative Write Latency: 0 00:14:31.958 Idle Power: Not Reported 00:14:31.958 Active Power: Not Reported 00:14:31.958 Non-Operational Permissive Mode: Not Supported 00:14:31.958 00:14:31.958 Health Information 00:14:31.958 ================== 00:14:31.958 Critical Warnings: 00:14:31.958 Available Spare Space: OK 00:14:31.958 Temperature: OK 00:14:31.958 Device Reliability: OK 00:14:31.958 Read Only: No 00:14:31.958 Volatile Memory Backup: OK 00:14:31.958 Current Temperature: 0 Kelvin[2024-07-20 18:44:42.184901] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:31.958 [2024-07-20 18:44:42.184920] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:31.958 [2024-07-20 18:44:42.184957] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:31.958 [2024-07-20 18:44:42.184975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.958 [2024-07-20 18:44:42.184987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.958 [2024-07-20 18:44:42.184997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.959 [2024-07-20 18:44:42.185007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.959 [2024-07-20 18:44:42.188804] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:31.959 [2024-07-20 18:44:42.188827] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:31.959 [2024-07-20 18:44:42.189521] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:31.959 [2024-07-20 18:44:42.189607] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:31.959 [2024-07-20 18:44:42.189622] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:31.959 [2024-07-20 18:44:42.190532] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:31.959 [2024-07-20 18:44:42.190554] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:31.959 [2024-07-20 18:44:42.190606] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:31.959 [2024-07-20 18:44:42.192578] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:31.959 (-273 Celsius) 00:14:31.959 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:31.959 Available Spare: 0% 00:14:31.959 Available Spare Threshold: 0% 00:14:31.959 Life Percentage Used: 0% 00:14:31.959 Data Units Read: 0 00:14:31.959 Data Units Written: 0 00:14:31.959 Host Read Commands: 0 00:14:31.959 Host Write Commands: 0 00:14:31.959 Controller Busy Time: 0 minutes 00:14:31.959 Power Cycles: 0 00:14:31.959 Power On Hours: 0 hours 00:14:31.959 Unsafe Shutdowns: 0 00:14:31.959 Unrecoverable Media Errors: 0 00:14:31.959 Lifetime Error Log Entries: 0 00:14:31.959 Warning Temperature Time: 0 minutes 00:14:31.959 Critical Temperature Time: 0 minutes 00:14:31.959 00:14:31.959 Number of Queues 00:14:31.959 ================ 00:14:31.959 Number of I/O Submission Queues: 127 00:14:31.959 Number of I/O Completion Queues: 127 00:14:31.959 00:14:31.959 Active Namespaces 00:14:31.959 ================= 00:14:31.959 Namespace ID:1 00:14:31.959 Error Recovery Timeout: Unlimited 00:14:31.959 Command Set Identifier: NVM (00h) 00:14:31.959 Deallocate: Supported 00:14:31.959 Deallocated/Unwritten Error: Not Supported 
00:14:31.959 Deallocated Read Value: Unknown 00:14:31.959 Deallocate in Write Zeroes: Not Supported 00:14:31.959 Deallocated Guard Field: 0xFFFF 00:14:31.959 Flush: Supported 00:14:31.959 Reservation: Supported 00:14:31.959 Namespace Sharing Capabilities: Multiple Controllers 00:14:31.959 Size (in LBAs): 131072 (0GiB) 00:14:31.959 Capacity (in LBAs): 131072 (0GiB) 00:14:31.959 Utilization (in LBAs): 131072 (0GiB) 00:14:31.959 NGUID: 41A8A81CB4E64E9294EFD9889C49C0D5 00:14:31.959 UUID: 41a8a81c-b4e6-4e92-94ef-d9889c49c0d5 00:14:31.959 Thin Provisioning: Not Supported 00:14:31.959 Per-NS Atomic Units: Yes 00:14:31.959 Atomic Boundary Size (Normal): 0 00:14:31.959 Atomic Boundary Size (PFail): 0 00:14:31.959 Atomic Boundary Offset: 0 00:14:31.959 Maximum Single Source Range Length: 65535 00:14:31.959 Maximum Copy Length: 65535 00:14:31.959 Maximum Source Range Count: 1 00:14:31.959 NGUID/EUI64 Never Reused: No 00:14:31.959 Namespace Write Protected: No 00:14:31.959 Number of LBA Formats: 1 00:14:31.959 Current LBA Format: LBA Format #00 00:14:31.959 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:31.959 00:14:31.959 18:44:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:31.959 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.217 [2024-07-20 18:44:42.421662] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:37.476 Initializing NVMe Controllers 00:14:37.476 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:37.476 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:37.476 Initialization complete. Launching workers. 00:14:37.476 ======================================================== 00:14:37.476 Latency(us) 00:14:37.476 Device Information : IOPS MiB/s Average min max 00:14:37.476 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35202.44 137.51 3636.03 1138.37 7667.37 00:14:37.476 ======================================================== 00:14:37.476 Total : 35202.44 137.51 3636.03 1138.37 7667.37 00:14:37.476 00:14:37.476 [2024-07-20 18:44:47.444669] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:37.476 18:44:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:37.476 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.476 [2024-07-20 18:44:47.690885] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:42.734 Initializing NVMe Controllers 00:14:42.734 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:42.734 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:42.734 Initialization complete. Launching workers. 
00:14:42.734 ======================================================== 00:14:42.734 Latency(us) 00:14:42.734 Device Information : IOPS MiB/s Average min max 00:14:42.734 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.18 62.70 7984.45 4976.64 11973.98 00:14:42.734 ======================================================== 00:14:42.734 Total : 16051.18 62.70 7984.45 4976.64 11973.98 00:14:42.734 00:14:42.734 [2024-07-20 18:44:52.728386] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:42.734 18:44:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:42.734 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.734 [2024-07-20 18:44:52.939407] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:47.989 [2024-07-20 18:44:58.023219] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:47.989 Initializing NVMe Controllers 00:14:47.989 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:47.989 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:47.989 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:47.989 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:47.989 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:47.989 Initialization complete. Launching workers. 00:14:47.989 Starting thread on core 2 00:14:47.989 Starting thread on core 3 00:14:47.989 Starting thread on core 1 00:14:47.990 18:44:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:47.990 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.247 [2024-07-20 18:44:58.331260] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:51.588 [2024-07-20 18:45:01.393802] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:51.588 Initializing NVMe Controllers 00:14:51.588 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:51.588 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:51.588 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:51.588 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:51.588 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:51.588 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:51.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:51.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:51.588 Initialization complete. Launching workers. 
00:14:51.588 Starting thread on core 1 with urgent priority queue 00:14:51.588 Starting thread on core 2 with urgent priority queue 00:14:51.588 Starting thread on core 3 with urgent priority queue 00:14:51.588 Starting thread on core 0 with urgent priority queue 00:14:51.588 SPDK bdev Controller (SPDK1 ) core 0: 5354.00 IO/s 18.68 secs/100000 ios 00:14:51.588 SPDK bdev Controller (SPDK1 ) core 1: 5500.67 IO/s 18.18 secs/100000 ios 00:14:51.588 SPDK bdev Controller (SPDK1 ) core 2: 5411.33 IO/s 18.48 secs/100000 ios 00:14:51.588 SPDK bdev Controller (SPDK1 ) core 3: 5369.00 IO/s 18.63 secs/100000 ios 00:14:51.588 ======================================================== 00:14:51.588 00:14:51.588 18:45:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:51.588 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.588 [2024-07-20 18:45:01.687339] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:51.588 Initializing NVMe Controllers 00:14:51.588 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:51.588 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:51.588 Namespace ID: 1 size: 0GB 00:14:51.588 Initialization complete. 00:14:51.588 INFO: using host memory buffer for IO 00:14:51.588 Hello world! 00:14:51.588 [2024-07-20 18:45:01.720868] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:51.588 18:45:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:51.588 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.845 [2024-07-20 18:45:02.016265] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:52.780 Initializing NVMe Controllers 00:14:52.780 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:52.780 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:52.780 Initialization complete. Launching workers. 
00:14:52.780 submit (in ns) avg, min, max = 7529.6, 3521.1, 4013314.4 00:14:52.780 complete (in ns) avg, min, max = 25281.6, 2058.9, 4016606.7 00:14:52.780 00:14:52.780 Submit histogram 00:14:52.780 ================ 00:14:52.780 Range in us Cumulative Count 00:14:52.780 3.508 - 3.532: 0.0148% ( 2) 00:14:52.780 3.532 - 3.556: 0.0593% ( 6) 00:14:52.780 3.556 - 3.579: 0.3261% ( 36) 00:14:52.780 3.579 - 3.603: 0.7115% ( 52) 00:14:52.780 3.603 - 3.627: 1.5713% ( 116) 00:14:52.780 3.627 - 3.650: 3.1204% ( 209) 00:14:52.780 3.650 - 3.674: 6.0851% ( 400) 00:14:52.780 3.674 - 3.698: 10.3913% ( 581) 00:14:52.780 3.698 - 3.721: 18.2330% ( 1058) 00:14:52.780 3.721 - 3.745: 27.3718% ( 1233) 00:14:52.780 3.745 - 3.769: 38.2078% ( 1462) 00:14:52.780 3.769 - 3.793: 47.0649% ( 1195) 00:14:52.780 3.793 - 3.816: 54.9437% ( 1063) 00:14:52.780 3.816 - 3.840: 60.5470% ( 756) 00:14:52.780 3.840 - 3.864: 65.6834% ( 693) 00:14:52.780 3.864 - 3.887: 69.8710% ( 565) 00:14:52.780 3.887 - 3.911: 73.2138% ( 451) 00:14:52.780 3.911 - 3.935: 76.4972% ( 443) 00:14:52.780 3.935 - 3.959: 79.7732% ( 442) 00:14:52.780 3.959 - 3.982: 82.9529% ( 429) 00:14:52.780 3.982 - 4.006: 86.0658% ( 420) 00:14:52.780 4.006 - 4.030: 88.6155% ( 344) 00:14:52.780 4.030 - 4.053: 90.5203% ( 257) 00:14:52.780 4.053 - 4.077: 92.1509% ( 220) 00:14:52.780 4.077 - 4.101: 93.3220% ( 158) 00:14:52.780 4.101 - 4.124: 94.3448% ( 138) 00:14:52.780 4.124 - 4.148: 95.1601% ( 110) 00:14:52.780 4.148 - 4.172: 95.7234% ( 76) 00:14:52.780 4.172 - 4.196: 96.0940% ( 50) 00:14:52.780 4.196 - 4.219: 96.5387% ( 60) 00:14:52.780 4.219 - 4.243: 96.8574% ( 43) 00:14:52.780 4.243 - 4.267: 97.0205% ( 22) 00:14:52.780 4.267 - 4.290: 97.1687% ( 20) 00:14:52.780 4.290 - 4.314: 97.2576% ( 12) 00:14:52.780 4.314 - 4.338: 97.3095% ( 7) 00:14:52.780 4.338 - 4.361: 97.3762% ( 9) 00:14:52.780 4.361 - 4.385: 97.4281% ( 7) 00:14:52.780 4.385 - 4.409: 97.4948% ( 9) 00:14:52.780 4.409 - 4.433: 97.5838% ( 12) 00:14:52.780 4.433 - 4.456: 97.6356% ( 7) 00:14:52.780 4.456 - 4.480: 97.6430% ( 1) 00:14:52.780 4.480 - 4.504: 97.6801% ( 5) 00:14:52.780 4.504 - 4.527: 97.7172% ( 5) 00:14:52.780 4.527 - 4.551: 97.7246% ( 1) 00:14:52.780 4.551 - 4.575: 97.7542% ( 4) 00:14:52.780 4.575 - 4.599: 97.7690% ( 2) 00:14:52.780 4.599 - 4.622: 97.7765% ( 1) 00:14:52.780 4.646 - 4.670: 97.7839% ( 1) 00:14:52.780 4.670 - 4.693: 97.7913% ( 1) 00:14:52.780 4.741 - 4.764: 97.7987% ( 1) 00:14:52.780 4.764 - 4.788: 97.8209% ( 3) 00:14:52.780 4.788 - 4.812: 97.8358% ( 2) 00:14:52.780 4.812 - 4.836: 97.8506% ( 2) 00:14:52.780 4.836 - 4.859: 97.8802% ( 4) 00:14:52.780 4.859 - 4.883: 97.9099% ( 4) 00:14:52.780 4.883 - 4.907: 97.9543% ( 6) 00:14:52.780 4.907 - 4.930: 98.0062% ( 7) 00:14:52.780 4.930 - 4.954: 98.0285% ( 3) 00:14:52.780 4.954 - 4.978: 98.0878% ( 8) 00:14:52.780 4.978 - 5.001: 98.1471% ( 8) 00:14:52.780 5.001 - 5.025: 98.1767% ( 4) 00:14:52.780 5.025 - 5.049: 98.1915% ( 2) 00:14:52.780 5.049 - 5.073: 98.2286% ( 5) 00:14:52.780 5.073 - 5.096: 98.2508% ( 3) 00:14:52.780 5.096 - 5.120: 98.2805% ( 4) 00:14:52.780 5.120 - 5.144: 98.3249% ( 6) 00:14:52.780 5.144 - 5.167: 98.3323% ( 1) 00:14:52.780 5.167 - 5.191: 98.3546% ( 3) 00:14:52.780 5.191 - 5.215: 98.3842% ( 4) 00:14:52.780 5.215 - 5.239: 98.3991% ( 2) 00:14:52.780 5.239 - 5.262: 98.4287% ( 4) 00:14:52.780 5.262 - 5.286: 98.4361% ( 1) 00:14:52.780 5.310 - 5.333: 98.4583% ( 3) 00:14:52.780 5.357 - 5.381: 98.4806% ( 3) 00:14:52.780 5.404 - 5.428: 98.4880% ( 1) 00:14:52.780 5.499 - 5.523: 98.4954% ( 1) 00:14:52.780 5.547 - 5.570: 98.5028% ( 1) 
00:14:52.780 5.594 - 5.618: 98.5102% ( 1) 00:14:52.780 5.618 - 5.641: 98.5251% ( 2) 00:14:52.780 5.641 - 5.665: 98.5325% ( 1) 00:14:52.780 5.713 - 5.736: 98.5399% ( 1) 00:14:52.780 5.736 - 5.760: 98.5473% ( 1) 00:14:52.780 5.784 - 5.807: 98.5547% ( 1) 00:14:52.780 5.807 - 5.831: 98.5695% ( 2) 00:14:52.780 5.879 - 5.902: 98.5843% ( 2) 00:14:52.780 5.902 - 5.926: 98.5992% ( 2) 00:14:52.780 5.926 - 5.950: 98.6066% ( 1) 00:14:52.780 6.021 - 6.044: 98.6214% ( 2) 00:14:52.780 6.044 - 6.068: 98.6362% ( 2) 00:14:52.780 6.068 - 6.116: 98.6436% ( 1) 00:14:52.780 6.116 - 6.163: 98.6659% ( 3) 00:14:52.780 6.163 - 6.210: 98.6733% ( 1) 00:14:52.780 6.210 - 6.258: 98.6807% ( 1) 00:14:52.780 6.305 - 6.353: 98.6881% ( 1) 00:14:52.780 6.353 - 6.400: 98.6955% ( 1) 00:14:52.780 6.400 - 6.447: 98.7029% ( 1) 00:14:52.780 6.447 - 6.495: 98.7103% ( 1) 00:14:52.780 6.590 - 6.637: 98.7178% ( 1) 00:14:52.780 6.684 - 6.732: 98.7326% ( 2) 00:14:52.780 6.732 - 6.779: 98.7400% ( 1) 00:14:52.780 6.827 - 6.874: 98.7474% ( 1) 00:14:52.780 6.874 - 6.921: 98.7622% ( 2) 00:14:52.780 6.921 - 6.969: 98.7696% ( 1) 00:14:52.780 6.969 - 7.016: 98.7771% ( 1) 00:14:52.780 7.016 - 7.064: 98.7919% ( 2) 00:14:52.780 7.111 - 7.159: 98.7993% ( 1) 00:14:52.780 7.206 - 7.253: 98.8067% ( 1) 00:14:52.780 7.301 - 7.348: 98.8215% ( 2) 00:14:52.780 7.348 - 7.396: 98.8289% ( 1) 00:14:52.780 7.396 - 7.443: 98.8512% ( 3) 00:14:52.780 7.585 - 7.633: 98.8586% ( 1) 00:14:52.780 7.680 - 7.727: 98.8660% ( 1) 00:14:52.780 7.727 - 7.775: 98.8882% ( 3) 00:14:52.780 7.822 - 7.870: 98.9031% ( 2) 00:14:52.780 7.870 - 7.917: 98.9105% ( 1) 00:14:52.780 7.917 - 7.964: 98.9179% ( 1) 00:14:52.780 8.012 - 8.059: 98.9253% ( 1) 00:14:52.780 8.059 - 8.107: 98.9327% ( 1) 00:14:52.780 8.154 - 8.201: 98.9401% ( 1) 00:14:52.780 8.249 - 8.296: 98.9475% ( 1) 00:14:52.780 8.296 - 8.344: 98.9623% ( 2) 00:14:52.780 8.439 - 8.486: 98.9698% ( 1) 00:14:52.780 8.486 - 8.533: 98.9772% ( 1) 00:14:52.780 8.581 - 8.628: 98.9846% ( 1) 00:14:52.780 8.723 - 8.770: 98.9920% ( 1) 00:14:52.780 8.770 - 8.818: 98.9994% ( 1) 00:14:52.780 8.818 - 8.865: 99.0068% ( 1) 00:14:52.780 8.913 - 8.960: 99.0142% ( 1) 00:14:52.780 9.007 - 9.055: 99.0216% ( 1) 00:14:52.780 9.102 - 9.150: 99.0291% ( 1) 00:14:52.780 9.150 - 9.197: 99.0439% ( 2) 00:14:52.780 9.197 - 9.244: 99.0513% ( 1) 00:14:52.780 9.244 - 9.292: 99.0587% ( 1) 00:14:52.780 9.339 - 9.387: 99.0661% ( 1) 00:14:52.780 9.434 - 9.481: 99.0809% ( 2) 00:14:52.780 9.481 - 9.529: 99.0883% ( 1) 00:14:52.780 9.624 - 9.671: 99.0958% ( 1) 00:14:52.780 9.671 - 9.719: 99.1032% ( 1) 00:14:52.780 9.813 - 9.861: 99.1106% ( 1) 00:14:52.780 10.003 - 10.050: 99.1180% ( 1) 00:14:52.780 10.193 - 10.240: 99.1254% ( 1) 00:14:52.780 10.287 - 10.335: 99.1328% ( 1) 00:14:52.780 10.809 - 10.856: 99.1402% ( 1) 00:14:52.780 10.904 - 10.951: 99.1476% ( 1) 00:14:52.780 11.283 - 11.330: 99.1625% ( 2) 00:14:52.780 11.330 - 11.378: 99.1773% ( 2) 00:14:52.780 11.473 - 11.520: 99.1847% ( 1) 00:14:52.780 11.615 - 11.662: 99.1921% ( 1) 00:14:52.780 11.804 - 11.852: 99.1995% ( 1) 00:14:52.780 12.041 - 12.089: 99.2069% ( 1) 00:14:52.780 12.516 - 12.610: 99.2143% ( 1) 00:14:52.780 12.610 - 12.705: 99.2218% ( 1) 00:14:52.780 12.800 - 12.895: 99.2514% ( 4) 00:14:52.780 12.895 - 12.990: 99.2588% ( 1) 00:14:52.780 12.990 - 13.084: 99.2736% ( 2) 00:14:52.780 13.084 - 13.179: 99.2811% ( 1) 00:14:52.780 13.464 - 13.559: 99.2885% ( 1) 00:14:52.780 13.559 - 13.653: 99.2959% ( 1) 00:14:52.780 13.653 - 13.748: 99.3033% ( 1) 00:14:52.780 13.938 - 14.033: 99.3107% ( 1) 00:14:52.780 14.033 - 
14.127: 99.3181% ( 1) 00:14:52.780 14.317 - 14.412: 99.3255% ( 1) 00:14:52.780 14.507 - 14.601: 99.3329% ( 1) 00:14:52.780 14.696 - 14.791: 99.3403% ( 1) 00:14:52.780 14.981 - 15.076: 99.3478% ( 1) 00:14:52.780 17.541 - 17.636: 99.3626% ( 2) 00:14:52.780 17.730 - 17.825: 99.3774% ( 2) 00:14:52.780 17.825 - 17.920: 99.3922% ( 2) 00:14:52.780 17.920 - 18.015: 99.4145% ( 3) 00:14:52.780 18.015 - 18.110: 99.4367% ( 3) 00:14:52.780 18.110 - 18.204: 99.4812% ( 6) 00:14:52.780 18.204 - 18.299: 99.5034% ( 3) 00:14:52.780 18.299 - 18.394: 99.5553% ( 7) 00:14:52.780 18.394 - 18.489: 99.5998% ( 6) 00:14:52.780 18.489 - 18.584: 99.6220% ( 3) 00:14:52.780 18.584 - 18.679: 99.6516% ( 4) 00:14:52.781 18.679 - 18.773: 99.6665% ( 2) 00:14:52.781 18.773 - 18.868: 99.7035% ( 5) 00:14:52.781 18.868 - 18.963: 99.7109% ( 1) 00:14:52.781 18.963 - 19.058: 99.7184% ( 1) 00:14:52.781 19.058 - 19.153: 99.7480% ( 4) 00:14:52.781 19.153 - 19.247: 99.7554% ( 1) 00:14:52.781 19.247 - 19.342: 99.7702% ( 2) 00:14:52.781 19.342 - 19.437: 99.7776% ( 1) 00:14:52.781 19.437 - 19.532: 99.7851% ( 1) 00:14:52.781 19.532 - 19.627: 99.8147% ( 4) 00:14:52.781 19.627 - 19.721: 99.8221% ( 1) 00:14:52.781 19.911 - 20.006: 99.8295% ( 1) 00:14:52.781 20.575 - 20.670: 99.8369% ( 1) 00:14:52.781 20.764 - 20.859: 99.8444% ( 1) 00:14:52.781 20.954 - 21.049: 99.8518% ( 1) 00:14:52.781 21.807 - 21.902: 99.8592% ( 1) 00:14:52.781 22.471 - 22.566: 99.8666% ( 1) 00:14:52.781 24.273 - 24.462: 99.8814% ( 2) 00:14:52.781 27.307 - 27.496: 99.8888% ( 1) 00:14:52.781 29.393 - 29.582: 99.8962% ( 1) 00:14:52.781 35.271 - 35.461: 99.9036% ( 1) 00:14:52.781 49.304 - 49.683: 99.9111% ( 1) 00:14:52.781 3980.705 - 4004.978: 99.9926% ( 11) 00:14:52.781 4004.978 - 4029.250: 100.0000% ( 1) 00:14:52.781 00:14:52.781 Complete histogram 00:14:52.781 ================== 00:14:52.781 Range in us Cumulative Count 00:14:52.781 2.050 - 2.062: 0.1408% ( 19) 00:14:52.781 2.062 - 2.074: 3.6911% ( 479) 00:14:52.781 2.074 - 2.086: 4.6101% ( 124) 00:14:52.781 2.086 - 2.098: 4.9511% ( 46) 00:14:52.781 2.098 - 2.110: 6.1444% ( 161) 00:14:52.781 2.110 - 2.121: 6.3890% ( 33) 00:14:52.781 2.121 - 2.133: 18.0033% ( 1567) 00:14:52.781 2.133 - 2.145: 45.3380% ( 3688) 00:14:52.781 2.145 - 2.157: 49.4145% ( 550) 00:14:52.781 2.157 - 2.169: 56.5520% ( 963) 00:14:52.781 2.169 - 2.181: 66.5802% ( 1353) 00:14:52.781 2.181 - 2.193: 68.8630% ( 308) 00:14:52.781 2.193 - 2.204: 74.7406% ( 793) 00:14:52.781 2.204 - 2.216: 84.4426% ( 1309) 00:14:52.781 2.216 - 2.228: 86.0139% ( 212) 00:14:52.781 2.228 - 2.240: 87.8891% ( 253) 00:14:52.781 2.240 - 2.252: 92.1435% ( 574) 00:14:52.781 2.252 - 2.264: 93.3146% ( 158) 00:14:52.781 2.264 - 2.276: 94.0631% ( 101) 00:14:52.781 2.276 - 2.287: 94.7895% ( 98) 00:14:52.781 2.287 - 2.299: 95.7086% ( 124) 00:14:52.781 2.299 - 2.311: 96.3460% ( 86) 00:14:52.781 2.311 - 2.323: 96.5683% ( 30) 00:14:52.781 2.323 - 2.335: 96.7166% ( 20) 00:14:52.781 2.335 - 2.347: 96.7610% ( 6) 00:14:52.781 2.347 - 2.359: 96.8352% ( 10) 00:14:52.781 2.359 - 2.370: 96.9463% ( 15) 00:14:52.781 2.370 - 2.382: 97.1390% ( 26) 00:14:52.781 2.382 - 2.394: 97.2428% ( 14) 00:14:52.781 2.394 - 2.406: 97.2725% ( 4) 00:14:52.781 2.406 - 2.418: 97.3169% ( 6) 00:14:52.781 2.418 - 2.430: 97.3910% ( 10) 00:14:52.781 2.430 - 2.441: 97.5541% ( 22) 00:14:52.781 2.441 - 2.453: 97.7098% ( 21) 00:14:52.781 2.453 - 2.465: 97.9173% ( 28) 00:14:52.781 2.465 - 2.477: 98.1100% ( 26) 00:14:52.781 2.477 - 2.489: 98.2286% ( 16) 00:14:52.781 2.489 - 2.501: 98.3398% ( 15) 00:14:52.781 2.501 - 2.513: 98.3768% ( 5) 
00:14:52.781 2.513 - 2.524: 98.4287% ( 7) 00:14:52.781 2.524 - 2.536: 98.4806% ( 7) 00:14:52.781 2.536 - 2.548: 98.5251% ( 6) 00:14:52.781 2.548 - 2.560: 98.5399% ( 2) 00:14:52.781 2.560 - 2.572: 98.5473% ( 1) 00:14:52.781 2.572 - 2.584: 98.5547% ( 1) 00:14:52.781 2.584 - 2.596: 98.5621% ( 1) 00:14:52.781 2.596 - 2.607: 98.5695% ( 1) 00:14:52.781 2.619 - 2.631: 98.5918% ( 3) 00:14:52.781 2.773 - 2.785: 98.5992% ( 1) 00:14:52.781 2.963 - 2.975: 98.6066% ( 1) 00:14:52.781 3.176 - 3.200: 98.6140% ( 1) 00:14:52.781 3.224 - 3.247: 9[2024-07-20 18:45:03.038491] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:52.781 8.6214% ( 1) 00:14:52.781 3.319 - 3.342: 98.6362% ( 2) 00:14:52.781 3.366 - 3.390: 98.6511% ( 2) 00:14:52.781 3.484 - 3.508: 98.6807% ( 4) 00:14:52.781 3.508 - 3.532: 98.7029% ( 3) 00:14:52.781 3.556 - 3.579: 98.7103% ( 1) 00:14:52.781 3.579 - 3.603: 98.7178% ( 1) 00:14:52.781 3.603 - 3.627: 98.7326% ( 2) 00:14:52.781 3.627 - 3.650: 98.7474% ( 2) 00:14:52.781 3.721 - 3.745: 98.7548% ( 1) 00:14:52.781 3.745 - 3.769: 98.7845% ( 4) 00:14:52.781 3.840 - 3.864: 98.7919% ( 1) 00:14:52.781 3.911 - 3.935: 98.7993% ( 1) 00:14:52.781 3.935 - 3.959: 98.8067% ( 1) 00:14:52.781 3.959 - 3.982: 98.8141% ( 1) 00:14:52.781 4.053 - 4.077: 98.8215% ( 1) 00:14:52.781 4.124 - 4.148: 98.8289% ( 1) 00:14:52.781 4.385 - 4.409: 98.8363% ( 1) 00:14:52.781 4.859 - 4.883: 98.8438% ( 1) 00:14:52.781 4.954 - 4.978: 98.8586% ( 2) 00:14:52.781 5.286 - 5.310: 98.8660% ( 1) 00:14:52.781 5.357 - 5.381: 98.8734% ( 1) 00:14:52.781 5.381 - 5.404: 98.8808% ( 1) 00:14:52.781 5.523 - 5.547: 98.8882% ( 1) 00:14:52.781 5.689 - 5.713: 98.9031% ( 2) 00:14:52.781 5.902 - 5.926: 98.9105% ( 1) 00:14:52.781 5.950 - 5.973: 98.9179% ( 1) 00:14:52.781 6.044 - 6.068: 98.9253% ( 1) 00:14:52.781 6.068 - 6.116: 98.9327% ( 1) 00:14:52.781 6.163 - 6.210: 98.9401% ( 1) 00:14:52.781 6.210 - 6.258: 98.9475% ( 1) 00:14:52.781 6.258 - 6.305: 98.9549% ( 1) 00:14:52.781 6.353 - 6.400: 98.9623% ( 1) 00:14:52.781 6.400 - 6.447: 98.9698% ( 1) 00:14:52.781 6.495 - 6.542: 98.9772% ( 1) 00:14:52.781 6.874 - 6.921: 98.9846% ( 1) 00:14:52.781 7.064 - 7.111: 98.9920% ( 1) 00:14:52.781 7.348 - 7.396: 98.9994% ( 1) 00:14:52.781 7.585 - 7.633: 99.0068% ( 1) 00:14:52.781 8.249 - 8.296: 99.0142% ( 1) 00:14:52.781 9.861 - 9.908: 99.0216% ( 1) 00:14:52.781 15.644 - 15.739: 99.0291% ( 1) 00:14:52.781 15.739 - 15.834: 99.0365% ( 1) 00:14:52.781 15.834 - 15.929: 99.0439% ( 1) 00:14:52.781 15.929 - 16.024: 99.0513% ( 1) 00:14:52.781 16.024 - 16.119: 99.0587% ( 1) 00:14:52.781 16.119 - 16.213: 99.1032% ( 6) 00:14:52.781 16.213 - 16.308: 99.1328% ( 4) 00:14:52.781 16.308 - 16.403: 99.1476% ( 2) 00:14:52.781 16.498 - 16.593: 99.1773% ( 4) 00:14:52.781 16.593 - 16.687: 99.2218% ( 6) 00:14:52.781 16.687 - 16.782: 99.2366% ( 2) 00:14:52.781 16.782 - 16.877: 99.2662% ( 4) 00:14:52.781 16.877 - 16.972: 99.2736% ( 1) 00:14:52.781 16.972 - 17.067: 99.3107% ( 5) 00:14:52.781 17.067 - 17.161: 99.3329% ( 3) 00:14:52.781 17.161 - 17.256: 99.3626% ( 4) 00:14:52.781 17.351 - 17.446: 99.3700% ( 1) 00:14:52.781 17.541 - 17.636: 99.3774% ( 1) 00:14:52.781 17.636 - 17.730: 99.3922% ( 2) 00:14:52.781 17.730 - 17.825: 99.4071% ( 2) 00:14:52.781 17.825 - 17.920: 99.4145% ( 1) 00:14:52.781 18.110 - 18.204: 99.4219% ( 1) 00:14:52.781 3009.801 - 3021.938: 99.4293% ( 1) 00:14:52.781 3980.705 - 4004.978: 99.9629% ( 72) 00:14:52.781 4004.978 - 4029.250: 100.0000% ( 5) 00:14:52.781 00:14:52.781 18:45:03 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:52.781 18:45:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:52.781 18:45:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:52.781 18:45:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:52.781 18:45:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:53.037 [ 00:14:53.037 { 00:14:53.037 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:53.037 "subtype": "Discovery", 00:14:53.037 "listen_addresses": [], 00:14:53.037 "allow_any_host": true, 00:14:53.037 "hosts": [] 00:14:53.037 }, 00:14:53.037 { 00:14:53.037 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:53.037 "subtype": "NVMe", 00:14:53.037 "listen_addresses": [ 00:14:53.037 { 00:14:53.037 "trtype": "VFIOUSER", 00:14:53.037 "adrfam": "IPv4", 00:14:53.037 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:53.037 "trsvcid": "0" 00:14:53.037 } 00:14:53.037 ], 00:14:53.037 "allow_any_host": true, 00:14:53.037 "hosts": [], 00:14:53.037 "serial_number": "SPDK1", 00:14:53.037 "model_number": "SPDK bdev Controller", 00:14:53.037 "max_namespaces": 32, 00:14:53.037 "min_cntlid": 1, 00:14:53.037 "max_cntlid": 65519, 00:14:53.037 "namespaces": [ 00:14:53.037 { 00:14:53.037 "nsid": 1, 00:14:53.037 "bdev_name": "Malloc1", 00:14:53.037 "name": "Malloc1", 00:14:53.037 "nguid": "41A8A81CB4E64E9294EFD9889C49C0D5", 00:14:53.037 "uuid": "41a8a81c-b4e6-4e92-94ef-d9889c49c0d5" 00:14:53.037 } 00:14:53.037 ] 00:14:53.037 }, 00:14:53.037 { 00:14:53.037 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:53.037 "subtype": "NVMe", 00:14:53.037 "listen_addresses": [ 00:14:53.037 { 00:14:53.037 "trtype": "VFIOUSER", 00:14:53.037 "adrfam": "IPv4", 00:14:53.037 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:53.037 "trsvcid": "0" 00:14:53.037 } 00:14:53.037 ], 00:14:53.037 "allow_any_host": true, 00:14:53.037 "hosts": [], 00:14:53.037 "serial_number": "SPDK2", 00:14:53.037 "model_number": "SPDK bdev Controller", 00:14:53.037 "max_namespaces": 32, 00:14:53.037 "min_cntlid": 1, 00:14:53.037 "max_cntlid": 65519, 00:14:53.037 "namespaces": [ 00:14:53.037 { 00:14:53.037 "nsid": 1, 00:14:53.037 "bdev_name": "Malloc2", 00:14:53.037 "name": "Malloc2", 00:14:53.037 "nguid": "4A12207D59504636B5762D6181FC0419", 00:14:53.037 "uuid": "4a12207d-5950-4636-b576-2d6181fc0419" 00:14:53.037 } 00:14:53.037 ] 00:14:53.037 } 00:14:53.037 ] 00:14:53.037 18:45:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:53.037 18:45:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1350430 00:14:53.037 18:45:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:53.037 18:45:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:53.037 18:45:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:14:53.037 18:45:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:53.037 18:45:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:53.037 18:45:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:14:53.037 18:45:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:53.037 18:45:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:53.293 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.293 [2024-07-20 18:45:03.512252] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:53.550 Malloc3 00:14:53.550 18:45:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:53.550 [2024-07-20 18:45:03.874106] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:53.822 18:45:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:53.822 Asynchronous Event Request test 00:14:53.822 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:53.822 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:53.822 Registering asynchronous event callbacks... 00:14:53.822 Starting namespace attribute notice tests for all controllers... 00:14:53.822 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:53.822 aer_cb - Changed Namespace 00:14:53.822 Cleaning up... 
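The namespace-attach notice exercised above is produced by hot-adding a second namespace to cnode1 while the aer tool waits on /tmp/aer_touch_file. A minimal sketch of that sequence, using only the rpc.py calls already visible in the trace (the $rpc shorthand is introduced here for readability; the workspace path is the one used by this job, and the bdev size/block-size interpretation is an assumption about the positional arguments):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512 --name Malloc3                        # back the new namespace with a 64 MB malloc bdev using 512-byte blocks
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2   # hot-add it as NSID 2; the connected host receives the namespace attribute notice (aer_cb above)
  $rpc nvmf_get_subsystems                                             # the listing that follows shows Malloc3 as nsid 2 under cnode1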
00:14:53.822 [ 00:14:53.822 { 00:14:53.822 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:53.822 "subtype": "Discovery", 00:14:53.822 "listen_addresses": [], 00:14:53.822 "allow_any_host": true, 00:14:53.822 "hosts": [] 00:14:53.822 }, 00:14:53.822 { 00:14:53.822 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:53.822 "subtype": "NVMe", 00:14:53.822 "listen_addresses": [ 00:14:53.822 { 00:14:53.822 "trtype": "VFIOUSER", 00:14:53.822 "adrfam": "IPv4", 00:14:53.823 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:53.823 "trsvcid": "0" 00:14:53.823 } 00:14:53.823 ], 00:14:53.823 "allow_any_host": true, 00:14:53.823 "hosts": [], 00:14:53.823 "serial_number": "SPDK1", 00:14:53.823 "model_number": "SPDK bdev Controller", 00:14:53.823 "max_namespaces": 32, 00:14:53.823 "min_cntlid": 1, 00:14:53.823 "max_cntlid": 65519, 00:14:53.823 "namespaces": [ 00:14:53.823 { 00:14:53.823 "nsid": 1, 00:14:53.823 "bdev_name": "Malloc1", 00:14:53.823 "name": "Malloc1", 00:14:53.823 "nguid": "41A8A81CB4E64E9294EFD9889C49C0D5", 00:14:53.823 "uuid": "41a8a81c-b4e6-4e92-94ef-d9889c49c0d5" 00:14:53.823 }, 00:14:53.823 { 00:14:53.823 "nsid": 2, 00:14:53.823 "bdev_name": "Malloc3", 00:14:53.823 "name": "Malloc3", 00:14:53.823 "nguid": "00460E954F5546C59ADC25B2BE818C1E", 00:14:53.823 "uuid": "00460e95-4f55-46c5-9adc-25b2be818c1e" 00:14:53.823 } 00:14:53.823 ] 00:14:53.823 }, 00:14:53.823 { 00:14:53.823 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:53.823 "subtype": "NVMe", 00:14:53.823 "listen_addresses": [ 00:14:53.823 { 00:14:53.823 "trtype": "VFIOUSER", 00:14:53.823 "adrfam": "IPv4", 00:14:53.823 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:53.823 "trsvcid": "0" 00:14:53.823 } 00:14:53.823 ], 00:14:53.823 "allow_any_host": true, 00:14:53.823 "hosts": [], 00:14:53.823 "serial_number": "SPDK2", 00:14:53.823 "model_number": "SPDK bdev Controller", 00:14:53.823 "max_namespaces": 32, 00:14:53.823 "min_cntlid": 1, 00:14:53.823 "max_cntlid": 65519, 00:14:53.823 "namespaces": [ 00:14:53.823 { 00:14:53.823 "nsid": 1, 00:14:53.823 "bdev_name": "Malloc2", 00:14:53.823 "name": "Malloc2", 00:14:53.823 "nguid": "4A12207D59504636B5762D6181FC0419", 00:14:53.823 "uuid": "4a12207d-5950-4636-b576-2d6181fc0419" 00:14:53.823 } 00:14:53.823 ] 00:14:53.823 } 00:14:53.823 ] 00:14:53.823 18:45:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1350430 00:14:53.823 18:45:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:53.823 18:45:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:53.823 18:45:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:53.823 18:45:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:54.082 [2024-07-20 18:45:04.156151] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:14:54.082 [2024-07-20 18:45:04.156192] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1350564 ] 00:14:54.082 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.082 [2024-07-20 18:45:04.191977] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:54.082 [2024-07-20 18:45:04.200120] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:54.082 [2024-07-20 18:45:04.200150] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa64eeb8000 00:14:54.082 [2024-07-20 18:45:04.201121] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:54.082 [2024-07-20 18:45:04.202126] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:54.082 [2024-07-20 18:45:04.203136] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:54.082 [2024-07-20 18:45:04.204143] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:54.082 [2024-07-20 18:45:04.205147] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:54.082 [2024-07-20 18:45:04.206152] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:54.082 [2024-07-20 18:45:04.207159] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:54.082 [2024-07-20 18:45:04.208171] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:54.082 [2024-07-20 18:45:04.209181] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:54.082 [2024-07-20 18:45:04.209203] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa64dc6a000 00:14:54.082 [2024-07-20 18:45:04.210367] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:54.082 [2024-07-20 18:45:04.222465] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:54.082 [2024-07-20 18:45:04.222500] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:54.082 [2024-07-20 18:45:04.231650] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:54.082 [2024-07-20 18:45:04.231705] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:54.082 [2024-07-20 18:45:04.231809] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:14:54.082 [2024-07-20 18:45:04.231835] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:54.082 [2024-07-20 18:45:04.231845] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:54.082 [2024-07-20 18:45:04.232656] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:54.082 [2024-07-20 18:45:04.232680] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:54.082 [2024-07-20 18:45:04.232694] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:54.082 [2024-07-20 18:45:04.233665] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:54.082 [2024-07-20 18:45:04.233684] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:54.082 [2024-07-20 18:45:04.233698] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:54.082 [2024-07-20 18:45:04.234672] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:54.083 [2024-07-20 18:45:04.234692] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:54.083 [2024-07-20 18:45:04.235682] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:54.083 [2024-07-20 18:45:04.235702] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:54.083 [2024-07-20 18:45:04.235715] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:54.083 [2024-07-20 18:45:04.235728] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:54.083 [2024-07-20 18:45:04.235838] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:54.083 [2024-07-20 18:45:04.235848] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:54.083 [2024-07-20 18:45:04.235856] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:54.083 [2024-07-20 18:45:04.236697] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:54.083 [2024-07-20 18:45:04.237698] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:54.083 [2024-07-20 18:45:04.238707] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:54.083 [2024-07-20 18:45:04.239696] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:54.083 [2024-07-20 18:45:04.239765] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:54.083 [2024-07-20 18:45:04.240717] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:54.083 [2024-07-20 18:45:04.240736] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:54.083 [2024-07-20 18:45:04.240745] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:54.083 [2024-07-20 18:45:04.240769] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:54.083 [2024-07-20 18:45:04.240808] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:54.083 [2024-07-20 18:45:04.240833] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:54.083 [2024-07-20 18:45:04.240843] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:54.083 [2024-07-20 18:45:04.240861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:54.083 [2024-07-20 18:45:04.244810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:54.083 [2024-07-20 18:45:04.244836] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:54.083 [2024-07-20 18:45:04.244862] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:54.083 [2024-07-20 18:45:04.244870] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:54.083 [2024-07-20 18:45:04.244878] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:54.083 [2024-07-20 18:45:04.244886] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:54.083 [2024-07-20 18:45:04.244894] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:54.083 [2024-07-20 18:45:04.244906] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:54.083 [2024-07-20 18:45:04.244919] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:54.083 [2024-07-20 18:45:04.244935] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:54.083 [2024-07-20 18:45:04.252805] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:54.083 [2024-07-20 18:45:04.252830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.083 [2024-07-20 18:45:04.252844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.083 [2024-07-20 18:45:04.252856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.083 [2024-07-20 18:45:04.252868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.083 [2024-07-20 18:45:04.252877] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:54.083 [2024-07-20 18:45:04.252893] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:54.083 [2024-07-20 18:45:04.252908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:54.083 [2024-07-20 18:45:04.260804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:54.083 [2024-07-20 18:45:04.260822] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:54.083 [2024-07-20 18:45:04.260832] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:54.083 [2024-07-20 18:45:04.260844] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:54.083 [2024-07-20 18:45:04.260858] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:54.083 [2024-07-20 18:45:04.260873] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:54.083 [2024-07-20 18:45:04.268803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:54.083 [2024-07-20 18:45:04.268877] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:54.083 [2024-07-20 18:45:04.268894] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:54.083 [2024-07-20 18:45:04.268907] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:54.083 [2024-07-20 18:45:04.268915] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:54.083 [2024-07-20 18:45:04.268925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:54.083 
[2024-07-20 18:45:04.276801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:54.083 [2024-07-20 18:45:04.276824] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:54.083 [2024-07-20 18:45:04.276843] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:54.083 [2024-07-20 18:45:04.276858] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:54.083 [2024-07-20 18:45:04.276870] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:54.083 [2024-07-20 18:45:04.276878] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:54.083 [2024-07-20 18:45:04.276888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:54.083 [2024-07-20 18:45:04.284807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:54.083 [2024-07-20 18:45:04.284835] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:54.083 [2024-07-20 18:45:04.284850] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:54.083 [2024-07-20 18:45:04.284862] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:54.083 [2024-07-20 18:45:04.284871] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:54.083 [2024-07-20 18:45:04.284880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:54.083 [2024-07-20 18:45:04.292803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:54.083 [2024-07-20 18:45:04.292824] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:54.083 [2024-07-20 18:45:04.292837] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:54.083 [2024-07-20 18:45:04.292851] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:54.083 [2024-07-20 18:45:04.292861] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:54.083 [2024-07-20 18:45:04.292869] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:54.083 [2024-07-20 18:45:04.292877] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:54.083 [2024-07-20 18:45:04.292885] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:54.083 [2024-07-20 18:45:04.292893] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:54.083 [2024-07-20 18:45:04.292923] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:54.083 [2024-07-20 18:45:04.300802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:54.083 [2024-07-20 18:45:04.300828] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:54.083 [2024-07-20 18:45:04.308801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:54.083 [2024-07-20 18:45:04.308827] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:54.083 [2024-07-20 18:45:04.316806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:54.083 [2024-07-20 18:45:04.316831] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:54.083 [2024-07-20 18:45:04.324805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:54.083 [2024-07-20 18:45:04.324831] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:54.083 [2024-07-20 18:45:04.324841] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:54.083 [2024-07-20 18:45:04.324847] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:54.083 [2024-07-20 18:45:04.324854] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:54.084 [2024-07-20 18:45:04.324863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:54.084 [2024-07-20 18:45:04.324875] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:54.084 [2024-07-20 18:45:04.324883] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:54.084 [2024-07-20 18:45:04.324892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:54.084 [2024-07-20 18:45:04.324902] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:54.084 [2024-07-20 18:45:04.324910] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:54.084 [2024-07-20 18:45:04.324919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:54.084 [2024-07-20 18:45:04.324930] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:54.084 [2024-07-20 18:45:04.324938] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:54.084 [2024-07-20 18:45:04.324947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:54.084 [2024-07-20 18:45:04.332805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:54.084 [2024-07-20 18:45:04.332832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:54.084 [2024-07-20 18:45:04.332848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:54.084 [2024-07-20 18:45:04.332862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:54.084 ===================================================== 00:14:54.084 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:54.084 ===================================================== 00:14:54.084 Controller Capabilities/Features 00:14:54.084 ================================ 00:14:54.084 Vendor ID: 4e58 00:14:54.084 Subsystem Vendor ID: 4e58 00:14:54.084 Serial Number: SPDK2 00:14:54.084 Model Number: SPDK bdev Controller 00:14:54.084 Firmware Version: 24.05.1 00:14:54.084 Recommended Arb Burst: 6 00:14:54.084 IEEE OUI Identifier: 8d 6b 50 00:14:54.084 Multi-path I/O 00:14:54.084 May have multiple subsystem ports: Yes 00:14:54.084 May have multiple controllers: Yes 00:14:54.084 Associated with SR-IOV VF: No 00:14:54.084 Max Data Transfer Size: 131072 00:14:54.084 Max Number of Namespaces: 32 00:14:54.084 Max Number of I/O Queues: 127 00:14:54.084 NVMe Specification Version (VS): 1.3 00:14:54.084 NVMe Specification Version (Identify): 1.3 00:14:54.084 Maximum Queue Entries: 256 00:14:54.084 Contiguous Queues Required: Yes 00:14:54.084 Arbitration Mechanisms Supported 00:14:54.084 Weighted Round Robin: Not Supported 00:14:54.084 Vendor Specific: Not Supported 00:14:54.084 Reset Timeout: 15000 ms 00:14:54.084 Doorbell Stride: 4 bytes 00:14:54.084 NVM Subsystem Reset: Not Supported 00:14:54.084 Command Sets Supported 00:14:54.084 NVM Command Set: Supported 00:14:54.084 Boot Partition: Not Supported 00:14:54.084 Memory Page Size Minimum: 4096 bytes 00:14:54.084 Memory Page Size Maximum: 4096 bytes 00:14:54.084 Persistent Memory Region: Not Supported 00:14:54.084 Optional Asynchronous Events Supported 00:14:54.084 Namespace Attribute Notices: Supported 00:14:54.084 Firmware Activation Notices: Not Supported 00:14:54.084 ANA Change Notices: Not Supported 00:14:54.084 PLE Aggregate Log Change Notices: Not Supported 00:14:54.084 LBA Status Info Alert Notices: Not Supported 00:14:54.084 EGE Aggregate Log Change Notices: Not Supported 00:14:54.084 Normal NVM Subsystem Shutdown event: Not Supported 00:14:54.084 Zone Descriptor Change Notices: Not Supported 00:14:54.084 Discovery Log Change Notices: Not Supported 00:14:54.084 Controller Attributes 00:14:54.084 128-bit Host Identifier: Supported 00:14:54.084 Non-Operational Permissive Mode: Not Supported 00:14:54.084 NVM Sets: Not Supported 00:14:54.084 Read Recovery Levels: Not Supported 00:14:54.084 Endurance Groups: Not Supported 00:14:54.084 Predictable Latency Mode: Not Supported 00:14:54.084 Traffic Based Keep ALive: Not Supported 00:14:54.084 Namespace Granularity: Not 
Supported 00:14:54.084 SQ Associations: Not Supported 00:14:54.084 UUID List: Not Supported 00:14:54.084 Multi-Domain Subsystem: Not Supported 00:14:54.084 Fixed Capacity Management: Not Supported 00:14:54.084 Variable Capacity Management: Not Supported 00:14:54.084 Delete Endurance Group: Not Supported 00:14:54.084 Delete NVM Set: Not Supported 00:14:54.084 Extended LBA Formats Supported: Not Supported 00:14:54.084 Flexible Data Placement Supported: Not Supported 00:14:54.084 00:14:54.084 Controller Memory Buffer Support 00:14:54.084 ================================ 00:14:54.084 Supported: No 00:14:54.084 00:14:54.084 Persistent Memory Region Support 00:14:54.084 ================================ 00:14:54.084 Supported: No 00:14:54.084 00:14:54.084 Admin Command Set Attributes 00:14:54.084 ============================ 00:14:54.084 Security Send/Receive: Not Supported 00:14:54.084 Format NVM: Not Supported 00:14:54.084 Firmware Activate/Download: Not Supported 00:14:54.084 Namespace Management: Not Supported 00:14:54.084 Device Self-Test: Not Supported 00:14:54.084 Directives: Not Supported 00:14:54.084 NVMe-MI: Not Supported 00:14:54.084 Virtualization Management: Not Supported 00:14:54.084 Doorbell Buffer Config: Not Supported 00:14:54.084 Get LBA Status Capability: Not Supported 00:14:54.084 Command & Feature Lockdown Capability: Not Supported 00:14:54.084 Abort Command Limit: 4 00:14:54.084 Async Event Request Limit: 4 00:14:54.084 Number of Firmware Slots: N/A 00:14:54.084 Firmware Slot 1 Read-Only: N/A 00:14:54.084 Firmware Activation Without Reset: N/A 00:14:54.084 Multiple Update Detection Support: N/A 00:14:54.084 Firmware Update Granularity: No Information Provided 00:14:54.084 Per-Namespace SMART Log: No 00:14:54.084 Asymmetric Namespace Access Log Page: Not Supported 00:14:54.084 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:54.084 Command Effects Log Page: Supported 00:14:54.084 Get Log Page Extended Data: Supported 00:14:54.084 Telemetry Log Pages: Not Supported 00:14:54.084 Persistent Event Log Pages: Not Supported 00:14:54.084 Supported Log Pages Log Page: May Support 00:14:54.084 Commands Supported & Effects Log Page: Not Supported 00:14:54.084 Feature Identifiers & Effects Log Page:May Support 00:14:54.084 NVMe-MI Commands & Effects Log Page: May Support 00:14:54.084 Data Area 4 for Telemetry Log: Not Supported 00:14:54.084 Error Log Page Entries Supported: 128 00:14:54.084 Keep Alive: Supported 00:14:54.084 Keep Alive Granularity: 10000 ms 00:14:54.084 00:14:54.084 NVM Command Set Attributes 00:14:54.084 ========================== 00:14:54.084 Submission Queue Entry Size 00:14:54.084 Max: 64 00:14:54.084 Min: 64 00:14:54.084 Completion Queue Entry Size 00:14:54.084 Max: 16 00:14:54.084 Min: 16 00:14:54.084 Number of Namespaces: 32 00:14:54.084 Compare Command: Supported 00:14:54.084 Write Uncorrectable Command: Not Supported 00:14:54.084 Dataset Management Command: Supported 00:14:54.084 Write Zeroes Command: Supported 00:14:54.084 Set Features Save Field: Not Supported 00:14:54.084 Reservations: Not Supported 00:14:54.084 Timestamp: Not Supported 00:14:54.084 Copy: Supported 00:14:54.084 Volatile Write Cache: Present 00:14:54.084 Atomic Write Unit (Normal): 1 00:14:54.084 Atomic Write Unit (PFail): 1 00:14:54.084 Atomic Compare & Write Unit: 1 00:14:54.084 Fused Compare & Write: Supported 00:14:54.084 Scatter-Gather List 00:14:54.084 SGL Command Set: Supported (Dword aligned) 00:14:54.084 SGL Keyed: Not Supported 00:14:54.084 SGL Bit Bucket Descriptor: Not Supported 
00:14:54.084 SGL Metadata Pointer: Not Supported 00:14:54.084 Oversized SGL: Not Supported 00:14:54.084 SGL Metadata Address: Not Supported 00:14:54.084 SGL Offset: Not Supported 00:14:54.084 Transport SGL Data Block: Not Supported 00:14:54.084 Replay Protected Memory Block: Not Supported 00:14:54.084 00:14:54.084 Firmware Slot Information 00:14:54.084 ========================= 00:14:54.084 Active slot: 1 00:14:54.084 Slot 1 Firmware Revision: 24.05.1 00:14:54.084 00:14:54.084 00:14:54.084 Commands Supported and Effects 00:14:54.084 ============================== 00:14:54.084 Admin Commands 00:14:54.084 -------------- 00:14:54.084 Get Log Page (02h): Supported 00:14:54.084 Identify (06h): Supported 00:14:54.084 Abort (08h): Supported 00:14:54.084 Set Features (09h): Supported 00:14:54.084 Get Features (0Ah): Supported 00:14:54.084 Asynchronous Event Request (0Ch): Supported 00:14:54.084 Keep Alive (18h): Supported 00:14:54.084 I/O Commands 00:14:54.084 ------------ 00:14:54.084 Flush (00h): Supported LBA-Change 00:14:54.084 Write (01h): Supported LBA-Change 00:14:54.084 Read (02h): Supported 00:14:54.084 Compare (05h): Supported 00:14:54.084 Write Zeroes (08h): Supported LBA-Change 00:14:54.084 Dataset Management (09h): Supported LBA-Change 00:14:54.084 Copy (19h): Supported LBA-Change 00:14:54.084 Unknown (79h): Supported LBA-Change 00:14:54.085 Unknown (7Ah): Supported 00:14:54.085 00:14:54.085 Error Log 00:14:54.085 ========= 00:14:54.085 00:14:54.085 Arbitration 00:14:54.085 =========== 00:14:54.085 Arbitration Burst: 1 00:14:54.085 00:14:54.085 Power Management 00:14:54.085 ================ 00:14:54.085 Number of Power States: 1 00:14:54.085 Current Power State: Power State #0 00:14:54.085 Power State #0: 00:14:54.085 Max Power: 0.00 W 00:14:54.085 Non-Operational State: Operational 00:14:54.085 Entry Latency: Not Reported 00:14:54.085 Exit Latency: Not Reported 00:14:54.085 Relative Read Throughput: 0 00:14:54.085 Relative Read Latency: 0 00:14:54.085 Relative Write Throughput: 0 00:14:54.085 Relative Write Latency: 0 00:14:54.085 Idle Power: Not Reported 00:14:54.085 Active Power: Not Reported 00:14:54.085 Non-Operational Permissive Mode: Not Supported 00:14:54.085 00:14:54.085 Health Information 00:14:54.085 ================== 00:14:54.085 Critical Warnings: 00:14:54.085 Available Spare Space: OK 00:14:54.085 Temperature: OK 00:14:54.085 Device Reliability: OK 00:14:54.085 Read Only: No 00:14:54.085 Volatile Memory Backup: OK 00:14:54.085 Current Temperature: 0 Kelvin[2024-07-20 18:45:04.332980] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:54.085 [2024-07-20 18:45:04.340803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:54.085 [2024-07-20 18:45:04.340846] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:54.085 [2024-07-20 18:45:04.340862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:54.085 [2024-07-20 18:45:04.340873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:54.085 [2024-07-20 18:45:04.340883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:54.085 [2024-07-20 
18:45:04.340893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:54.085 [2024-07-20 18:45:04.340985] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:54.085 [2024-07-20 18:45:04.341006] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:54.085 [2024-07-20 18:45:04.341987] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:54.085 [2024-07-20 18:45:04.342057] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:54.085 [2024-07-20 18:45:04.342072] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:54.085 [2024-07-20 18:45:04.342992] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:54.085 [2024-07-20 18:45:04.343016] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:54.085 [2024-07-20 18:45:04.343066] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:54.085 [2024-07-20 18:45:04.345805] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:54.085 (-273 Celsius) 00:14:54.085 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:54.085 Available Spare: 0% 00:14:54.085 Available Spare Threshold: 0% 00:14:54.085 Life Percentage Used: 0% 00:14:54.085 Data Units Read: 0 00:14:54.085 Data Units Written: 0 00:14:54.085 Host Read Commands: 0 00:14:54.085 Host Write Commands: 0 00:14:54.085 Controller Busy Time: 0 minutes 00:14:54.085 Power Cycles: 0 00:14:54.085 Power On Hours: 0 hours 00:14:54.085 Unsafe Shutdowns: 0 00:14:54.085 Unrecoverable Media Errors: 0 00:14:54.085 Lifetime Error Log Entries: 0 00:14:54.085 Warning Temperature Time: 0 minutes 00:14:54.085 Critical Temperature Time: 0 minutes 00:14:54.085 00:14:54.085 Number of Queues 00:14:54.085 ================ 00:14:54.085 Number of I/O Submission Queues: 127 00:14:54.085 Number of I/O Completion Queues: 127 00:14:54.085 00:14:54.085 Active Namespaces 00:14:54.085 ================= 00:14:54.085 Namespace ID:1 00:14:54.085 Error Recovery Timeout: Unlimited 00:14:54.085 Command Set Identifier: NVM (00h) 00:14:54.085 Deallocate: Supported 00:14:54.085 Deallocated/Unwritten Error: Not Supported 00:14:54.085 Deallocated Read Value: Unknown 00:14:54.085 Deallocate in Write Zeroes: Not Supported 00:14:54.085 Deallocated Guard Field: 0xFFFF 00:14:54.085 Flush: Supported 00:14:54.085 Reservation: Supported 00:14:54.085 Namespace Sharing Capabilities: Multiple Controllers 00:14:54.085 Size (in LBAs): 131072 (0GiB) 00:14:54.085 Capacity (in LBAs): 131072 (0GiB) 00:14:54.085 Utilization (in LBAs): 131072 (0GiB) 00:14:54.085 NGUID: 4A12207D59504636B5762D6181FC0419 00:14:54.085 UUID: 4a12207d-5950-4636-b576-2d6181fc0419 00:14:54.085 Thin Provisioning: Not Supported 00:14:54.085 Per-NS Atomic Units: Yes 00:14:54.085 Atomic Boundary Size (Normal): 0 00:14:54.085 Atomic Boundary Size (PFail): 0 00:14:54.085 Atomic Boundary Offset: 0 00:14:54.085 Maximum Single Source Range 
Length: 65535 00:14:54.085 Maximum Copy Length: 65535 00:14:54.085 Maximum Source Range Count: 1 00:14:54.085 NGUID/EUI64 Never Reused: No 00:14:54.085 Namespace Write Protected: No 00:14:54.085 Number of LBA Formats: 1 00:14:54.085 Current LBA Format: LBA Format #00 00:14:54.085 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:54.085 00:14:54.085 18:45:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:54.342 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.342 [2024-07-20 18:45:04.573628] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:59.600 Initializing NVMe Controllers 00:14:59.600 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:59.600 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:59.600 Initialization complete. Launching workers. 00:14:59.600 ======================================================== 00:14:59.600 Latency(us) 00:14:59.600 Device Information : IOPS MiB/s Average min max 00:14:59.600 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35526.41 138.78 3602.40 1157.56 8264.01 00:14:59.600 ======================================================== 00:14:59.600 Total : 35526.41 138.78 3602.40 1157.56 8264.01 00:14:59.600 00:14:59.600 [2024-07-20 18:45:09.677185] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:59.600 18:45:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:59.600 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.600 [2024-07-20 18:45:09.912802] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:04.855 Initializing NVMe Controllers 00:15:04.855 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:04.855 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:04.855 Initialization complete. Launching workers. 
00:15:04.855 ======================================================== 00:15:04.855 Latency(us) 00:15:04.855 Device Information : IOPS MiB/s Average min max 00:15:04.855 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34257.05 133.82 3735.78 1194.52 9738.32 00:15:04.855 ======================================================== 00:15:04.855 Total : 34257.05 133.82 3735.78 1194.52 9738.32 00:15:04.855 00:15:04.855 [2024-07-20 18:45:14.935767] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:04.855 18:45:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:04.855 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.855 [2024-07-20 18:45:15.146497] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:10.125 [2024-07-20 18:45:20.297949] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:10.125 Initializing NVMe Controllers 00:15:10.125 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:10.125 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:10.125 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:10.125 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:10.125 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:10.125 Initialization complete. Launching workers. 00:15:10.125 Starting thread on core 2 00:15:10.125 Starting thread on core 3 00:15:10.125 Starting thread on core 1 00:15:10.125 18:45:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:10.125 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.383 [2024-07-20 18:45:20.613342] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:13.680 [2024-07-20 18:45:23.686081] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:13.680 Initializing NVMe Controllers 00:15:13.680 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:13.680 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:13.680 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:13.680 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:13.680 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:13.680 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:13.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:13.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:13.680 Initialization complete. Launching workers. 
00:15:13.680 Starting thread on core 1 with urgent priority queue 00:15:13.680 Starting thread on core 2 with urgent priority queue 00:15:13.680 Starting thread on core 3 with urgent priority queue 00:15:13.680 Starting thread on core 0 with urgent priority queue 00:15:13.680 SPDK bdev Controller (SPDK2 ) core 0: 5720.00 IO/s 17.48 secs/100000 ios 00:15:13.680 SPDK bdev Controller (SPDK2 ) core 1: 5147.67 IO/s 19.43 secs/100000 ios 00:15:13.680 SPDK bdev Controller (SPDK2 ) core 2: 5791.33 IO/s 17.27 secs/100000 ios 00:15:13.680 SPDK bdev Controller (SPDK2 ) core 3: 4767.67 IO/s 20.97 secs/100000 ios 00:15:13.680 ======================================================== 00:15:13.680 00:15:13.680 18:45:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:13.680 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.680 [2024-07-20 18:45:23.996388] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:13.936 Initializing NVMe Controllers 00:15:13.936 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:13.936 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:13.936 Namespace ID: 1 size: 0GB 00:15:13.936 Initialization complete. 00:15:13.936 INFO: using host memory buffer for IO 00:15:13.936 Hello world! 00:15:13.936 [2024-07-20 18:45:24.006464] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:13.936 18:45:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:13.936 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.213 [2024-07-20 18:45:24.279292] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:15.144 Initializing NVMe Controllers 00:15:15.144 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:15.144 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:15.144 Initialization complete. Launching workers. 
00:15:15.144 submit (in ns) avg, min, max = 8658.6, 3481.1, 4016033.3 00:15:15.144 complete (in ns) avg, min, max = 24286.6, 2060.0, 5996308.9 00:15:15.144 00:15:15.144 Submit histogram 00:15:15.144 ================ 00:15:15.144 Range in us Cumulative Count 00:15:15.144 3.461 - 3.484: 0.0073% ( 1) 00:15:15.144 3.508 - 3.532: 0.1904% ( 25) 00:15:15.144 3.532 - 3.556: 0.6592% ( 64) 00:15:15.144 3.556 - 3.579: 2.4610% ( 246) 00:15:15.144 3.579 - 3.603: 5.7277% ( 446) 00:15:15.144 3.603 - 3.627: 12.0120% ( 858) 00:15:15.144 3.627 - 3.650: 19.9736% ( 1087) 00:15:15.144 3.650 - 3.674: 29.7371% ( 1333) 00:15:15.144 3.674 - 3.698: 38.1967% ( 1155) 00:15:15.144 3.698 - 3.721: 46.4367% ( 1125) 00:15:15.144 3.721 - 3.745: 51.2708% ( 660) 00:15:15.144 3.745 - 3.769: 56.0390% ( 651) 00:15:15.144 3.769 - 3.793: 59.7964% ( 513) 00:15:15.144 3.793 - 3.816: 63.3634% ( 487) 00:15:15.144 3.816 - 3.840: 66.9963% ( 496) 00:15:15.144 3.840 - 3.864: 70.8489% ( 526) 00:15:15.144 3.864 - 3.887: 74.9945% ( 566) 00:15:15.144 3.887 - 3.911: 79.0522% ( 554) 00:15:15.144 3.911 - 3.935: 82.9342% ( 530) 00:15:15.144 3.935 - 3.959: 85.2780% ( 320) 00:15:15.144 3.959 - 3.982: 87.3141% ( 278) 00:15:15.144 3.982 - 4.006: 89.2698% ( 267) 00:15:15.144 4.006 - 4.030: 90.6467% ( 188) 00:15:15.144 4.030 - 4.053: 91.7234% ( 147) 00:15:15.144 4.053 - 4.077: 92.6024% ( 120) 00:15:15.144 4.077 - 4.101: 93.5912% ( 135) 00:15:15.144 4.101 - 4.124: 94.3968% ( 110) 00:15:15.144 4.124 - 4.148: 95.1000% ( 96) 00:15:15.144 4.148 - 4.172: 95.6933% ( 81) 00:15:15.144 4.172 - 4.196: 96.1034% ( 56) 00:15:15.144 4.196 - 4.219: 96.3451% ( 33) 00:15:15.144 4.219 - 4.243: 96.5795% ( 32) 00:15:15.144 4.243 - 4.267: 96.7260% ( 20) 00:15:15.144 4.267 - 4.290: 96.8871% ( 22) 00:15:15.144 4.290 - 4.314: 96.9750% ( 12) 00:15:15.144 4.314 - 4.338: 97.0995% ( 17) 00:15:15.144 4.338 - 4.361: 97.1655% ( 9) 00:15:15.144 4.361 - 4.385: 97.2167% ( 7) 00:15:15.144 4.385 - 4.409: 97.2534% ( 5) 00:15:15.144 4.409 - 4.433: 97.2973% ( 6) 00:15:15.144 4.433 - 4.456: 97.3412% ( 6) 00:15:15.144 4.456 - 4.480: 97.3486% ( 1) 00:15:15.144 4.480 - 4.504: 97.3705% ( 3) 00:15:15.144 4.504 - 4.527: 97.3779% ( 1) 00:15:15.144 4.527 - 4.551: 97.3925% ( 2) 00:15:15.144 4.551 - 4.575: 97.4072% ( 2) 00:15:15.144 4.599 - 4.622: 97.4145% ( 1) 00:15:15.144 4.622 - 4.646: 97.4218% ( 1) 00:15:15.144 4.670 - 4.693: 97.4365% ( 2) 00:15:15.144 4.717 - 4.741: 97.4438% ( 1) 00:15:15.144 4.741 - 4.764: 97.4658% ( 3) 00:15:15.144 4.764 - 4.788: 97.5024% ( 5) 00:15:15.144 4.788 - 4.812: 97.5537% ( 7) 00:15:15.144 4.812 - 4.836: 97.5756% ( 3) 00:15:15.144 4.836 - 4.859: 97.6196% ( 6) 00:15:15.144 4.859 - 4.883: 97.6562% ( 5) 00:15:15.144 4.883 - 4.907: 97.6708% ( 2) 00:15:15.144 4.907 - 4.930: 97.7148% ( 6) 00:15:15.144 4.930 - 4.954: 97.7294% ( 2) 00:15:15.144 4.954 - 4.978: 97.7954% ( 9) 00:15:15.144 4.978 - 5.001: 97.8832% ( 12) 00:15:15.144 5.001 - 5.025: 97.9199% ( 5) 00:15:15.144 5.025 - 5.049: 97.9638% ( 6) 00:15:15.144 5.049 - 5.073: 97.9711% ( 1) 00:15:15.144 5.073 - 5.096: 98.0078% ( 5) 00:15:15.144 5.096 - 5.120: 98.0297% ( 3) 00:15:15.144 5.120 - 5.144: 98.0883% ( 8) 00:15:15.144 5.144 - 5.167: 98.0957% ( 1) 00:15:15.144 5.191 - 5.215: 98.1176% ( 3) 00:15:15.145 5.215 - 5.239: 98.1250% ( 1) 00:15:15.145 5.239 - 5.262: 98.1469% ( 3) 00:15:15.145 5.262 - 5.286: 98.1543% ( 1) 00:15:15.145 5.286 - 5.310: 98.1616% ( 1) 00:15:15.145 5.333 - 5.357: 98.1762% ( 2) 00:15:15.145 5.357 - 5.381: 98.1909% ( 2) 00:15:15.145 5.404 - 5.428: 98.1982% ( 1) 00:15:15.145 5.428 - 5.452: 98.2055% ( 1) 
00:15:15.145 5.452 - 5.476: 98.2202% ( 2) 00:15:15.145 5.523 - 5.547: 98.2275% ( 1) 00:15:15.145 5.547 - 5.570: 98.2421% ( 2) 00:15:15.145 5.831 - 5.855: 98.2495% ( 1) 00:15:15.145 6.163 - 6.210: 98.2568% ( 1) 00:15:15.145 6.210 - 6.258: 98.2641% ( 1) 00:15:15.145 6.400 - 6.447: 98.2788% ( 2) 00:15:15.145 6.447 - 6.495: 98.2861% ( 1) 00:15:15.145 6.637 - 6.684: 98.2934% ( 1) 00:15:15.145 6.684 - 6.732: 98.3007% ( 1) 00:15:15.145 6.732 - 6.779: 98.3081% ( 1) 00:15:15.145 6.827 - 6.874: 98.3227% ( 2) 00:15:15.145 6.874 - 6.921: 98.3374% ( 2) 00:15:15.145 6.969 - 7.016: 98.3593% ( 3) 00:15:15.145 7.159 - 7.206: 98.3667% ( 1) 00:15:15.145 7.253 - 7.301: 98.3813% ( 2) 00:15:15.145 7.348 - 7.396: 98.3960% ( 2) 00:15:15.145 7.396 - 7.443: 98.4033% ( 1) 00:15:15.145 7.490 - 7.538: 98.4179% ( 2) 00:15:15.145 7.585 - 7.633: 98.4253% ( 1) 00:15:15.145 7.680 - 7.727: 98.4326% ( 1) 00:15:15.145 7.727 - 7.775: 98.4399% ( 1) 00:15:15.145 7.775 - 7.822: 98.4472% ( 1) 00:15:15.145 7.822 - 7.870: 98.4546% ( 1) 00:15:15.145 7.870 - 7.917: 98.4619% ( 1) 00:15:15.145 7.964 - 8.012: 98.4692% ( 1) 00:15:15.145 8.012 - 8.059: 98.4765% ( 1) 00:15:15.145 8.059 - 8.107: 98.4838% ( 1) 00:15:15.145 8.154 - 8.201: 98.4985% ( 2) 00:15:15.145 8.249 - 8.296: 98.5058% ( 1) 00:15:15.145 8.296 - 8.344: 98.5205% ( 2) 00:15:15.145 8.344 - 8.391: 98.5424% ( 3) 00:15:15.145 8.439 - 8.486: 98.5498% ( 1) 00:15:15.145 8.676 - 8.723: 98.5571% ( 1) 00:15:15.145 8.770 - 8.818: 98.5644% ( 1) 00:15:15.145 8.865 - 8.913: 98.5717% ( 1) 00:15:15.145 8.960 - 9.007: 98.5791% ( 1) 00:15:15.145 9.007 - 9.055: 98.5864% ( 1) 00:15:15.145 9.055 - 9.102: 98.6010% ( 2) 00:15:15.145 9.197 - 9.244: 98.6157% ( 2) 00:15:15.145 9.244 - 9.292: 98.6230% ( 1) 00:15:15.145 9.481 - 9.529: 98.6303% ( 1) 00:15:15.145 9.529 - 9.576: 98.6377% ( 1) 00:15:15.145 9.576 - 9.624: 98.6596% ( 3) 00:15:15.145 9.861 - 9.908: 98.6670% ( 1) 00:15:15.145 10.335 - 10.382: 98.6743% ( 1) 00:15:15.145 10.382 - 10.430: 98.6816% ( 1) 00:15:15.145 10.524 - 10.572: 98.6889% ( 1) 00:15:15.145 10.572 - 10.619: 98.6963% ( 1) 00:15:15.145 10.714 - 10.761: 98.7036% ( 1) 00:15:15.145 10.856 - 10.904: 98.7109% ( 1) 00:15:15.145 10.999 - 11.046: 98.7182% ( 1) 00:15:15.145 11.378 - 11.425: 98.7256% ( 1) 00:15:15.145 11.757 - 11.804: 98.7402% ( 2) 00:15:15.145 11.899 - 11.947: 98.7475% ( 1) 00:15:15.145 11.947 - 11.994: 98.7549% ( 1) 00:15:15.145 11.994 - 12.041: 98.7622% ( 1) 00:15:15.145 12.136 - 12.231: 98.7695% ( 1) 00:15:15.145 12.231 - 12.326: 98.7768% ( 1) 00:15:15.145 12.705 - 12.800: 98.7915% ( 2) 00:15:15.145 12.990 - 13.084: 98.7988% ( 1) 00:15:15.145 13.369 - 13.464: 98.8134% ( 2) 00:15:15.145 13.464 - 13.559: 98.8208% ( 1) 00:15:15.145 13.559 - 13.653: 98.8354% ( 2) 00:15:15.145 13.653 - 13.748: 98.8501% ( 2) 00:15:15.145 13.748 - 13.843: 98.8574% ( 1) 00:15:15.145 14.033 - 14.127: 98.8647% ( 1) 00:15:15.145 14.127 - 14.222: 98.8794% ( 2) 00:15:15.145 14.222 - 14.317: 98.8867% ( 1) 00:15:15.145 14.317 - 14.412: 98.8940% ( 1) 00:15:15.145 14.791 - 14.886: 98.9087% ( 2) 00:15:15.145 14.981 - 15.076: 98.9160% ( 1) 00:15:15.145 15.170 - 15.265: 98.9306% ( 2) 00:15:15.145 17.161 - 17.256: 98.9380% ( 1) 00:15:15.145 17.256 - 17.351: 98.9526% ( 2) 00:15:15.145 17.351 - 17.446: 98.9673% ( 2) 00:15:15.145 17.446 - 17.541: 98.9892% ( 3) 00:15:15.145 17.541 - 17.636: 98.9966% ( 1) 00:15:15.145 17.636 - 17.730: 99.0478% ( 7) 00:15:15.145 17.730 - 17.825: 99.0771% ( 4) 00:15:15.145 17.825 - 17.920: 99.1137% ( 5) 00:15:15.145 17.920 - 18.015: 99.1723% ( 8) 00:15:15.145 18.015 - 18.110: 
99.2383% ( 9) 00:15:15.145 18.110 - 18.204: 99.3188% ( 11) 00:15:15.145 18.204 - 18.299: 99.4214% ( 14) 00:15:15.145 18.299 - 18.394: 99.4653% ( 6) 00:15:15.145 18.394 - 18.489: 99.5459% ( 11) 00:15:15.145 18.489 - 18.584: 99.5898% ( 6) 00:15:15.145 18.584 - 18.679: 99.6484% ( 8) 00:15:15.145 18.679 - 18.773: 99.6777% ( 4) 00:15:15.145 18.773 - 18.868: 99.7217% ( 6) 00:15:15.145 18.868 - 18.963: 99.7363% ( 2) 00:15:15.145 18.963 - 19.058: 99.7510% ( 2) 00:15:15.145 19.153 - 19.247: 99.7729% ( 3) 00:15:15.145 19.437 - 19.532: 99.7803% ( 1) 00:15:15.145 19.627 - 19.721: 99.7949% ( 2) 00:15:15.145 19.721 - 19.816: 99.8022% ( 1) 00:15:15.145 19.816 - 19.911: 99.8169% ( 2) 00:15:15.145 19.911 - 20.006: 99.8242% ( 1) 00:15:15.145 20.006 - 20.101: 99.8315% ( 1) 00:15:15.145 20.196 - 20.290: 99.8389% ( 1) 00:15:15.145 22.092 - 22.187: 99.8462% ( 1) 00:15:15.145 22.376 - 22.471: 99.8535% ( 1) 00:15:15.145 23.419 - 23.514: 99.8608% ( 1) 00:15:15.145 23.514 - 23.609: 99.8682% ( 1) 00:15:15.145 26.359 - 26.548: 99.8755% ( 1) 00:15:15.145 26.927 - 27.117: 99.8828% ( 1) 00:15:15.145 3980.705 - 4004.978: 99.9707% ( 12) 00:15:15.145 4004.978 - 4029.250: 100.0000% ( 4) 00:15:15.145 00:15:15.145 Complete histogram 00:15:15.145 ================== 00:15:15.145 Range in us Cumulative Count 00:15:15.145 2.050 - 2.062: 0.1538% ( 21) 00:15:15.145 2.062 - 2.074: 22.7423% ( 3084) 00:15:15.145 2.074 - 2.086: 32.4178% ( 1321) 00:15:15.145 2.086 - 2.098: 34.9813% ( 350) 00:15:15.145 2.098 - 2.110: 51.8348% ( 2301) 00:15:15.145 2.110 - 2.121: 57.1596% ( 727) 00:15:15.145 2.121 - 2.133: 60.0454% ( 394) 00:15:15.145 2.133 - 2.145: 69.5452% ( 1297) 00:15:15.145 2.145 - 2.157: 71.5813% ( 278) 00:15:15.145 2.157 - 2.169: 73.5443% ( 268) 00:15:15.145 2.169 - 2.181: 78.7446% ( 710) 00:15:15.145 2.181 - 2.193: 80.3706% ( 222) 00:15:15.145 2.193 - 2.204: 81.6524% ( 175) 00:15:15.145 2.204 - 2.216: 86.0617% ( 602) 00:15:15.145 2.216 - 2.228: 88.4128% ( 321) 00:15:15.145 2.228 - 2.240: 89.8044% ( 190) 00:15:15.145 2.240 - 2.252: 92.6097% ( 383) 00:15:15.145 2.252 - 2.264: 93.5619% ( 130) 00:15:15.145 2.264 - 2.276: 93.9354% ( 51) 00:15:15.145 2.276 - 2.287: 94.4115% ( 65) 00:15:15.145 2.287 - 2.299: 95.0560% ( 88) 00:15:15.145 2.299 - 2.311: 95.5907% ( 73) 00:15:15.145 2.311 - 2.323: 95.8251% ( 32) 00:15:15.145 2.323 - 2.335: 95.8910% ( 9) 00:15:15.145 2.335 - 2.347: 96.0229% ( 18) 00:15:15.145 2.347 - 2.359: 96.3451% ( 44) 00:15:15.145 2.359 - 2.370: 96.6161% ( 37) 00:15:15.145 2.370 - 2.382: 96.9897% ( 51) 00:15:15.145 2.382 - 2.394: 97.4145% ( 58) 00:15:15.145 2.394 - 2.406: 97.6196% ( 28) 00:15:15.145 2.406 - 2.418: 97.7294% ( 15) 00:15:15.145 2.418 - 2.430: 97.8979% ( 23) 00:15:15.145 2.430 - 2.441: 98.0517% ( 21) 00:15:15.145 2.441 - 2.453: 98.1616% ( 15) 00:15:15.145 2.453 - 2.465: 98.2568% ( 13) 00:15:15.145 2.465 - 2.477: 98.3227% ( 9) 00:15:15.145 2.477 - 2.489: 98.4033% ( 11) 00:15:15.145 2.489 - 2.501: 98.4399% ( 5) 00:15:15.145 2.501 - 2.513: 98.4912% ( 7) 00:15:15.145 2.524 - 2.536: 98.4985% ( 1) 00:15:15.145 2.536 - 2.548: 98.5278% ( 4) 00:15:15.145 2.560 - 2.572: 98.5351% ( 1) 00:15:15.145 2.607 - 2.619: 98.5498% ( 2) 00:15:15.145 2.619 - 2.631: 98.5571% ( 1) 00:15:15.145 2.643 - 2.655: 98.5644% ( 1) 00:15:15.145 2.690 - 2.702: 98.5717% ( 1) 00:15:15.145 2.726 - 2.738: 98.5864% ( 2) 00:15:15.145 2.773 - 2.785: 98.5937% ( 1) 00:15:15.145 3.390 - 3.413: 98.6084% ( 2) 00:15:15.145 3.461 - 3.484: 98.6303% ( 3) 00:15:15.145 3.484 - 3.508: 98.6377% ( 1) 00:15:15.145 3.508 - 3.532: 98.6523% ( 2) 00:15:15.145 3.532 - 
3.556: 98.6596% ( 1) 00:15:15.145 3.579 - 3.603: 98.6743% ( 2) 00:15:15.145 3.603 - 3.627: 98.6816% ( 1) 00:15:15.145 3.650 - 3.674: 98.6889% ( 1) 00:15:15.145 3.674 - 3.698: 98.6963% ( 1) 00:15:15.145 3.745 - 3.769: 98.7109% ( 2) 00:15:15.145 3.816 - 3.840: 98.7256% ( 2) 00:15:15.145 3.864 - 3.887: 98.7402% ( 2) 00:15:15.145 3.911 - 3.935: 98.7475% ( 1) 00:15:15.145 3.935 - 3.959: 98.7549% ( 1) 00:15:15.145 3.959 - 3.982: 98.7622% ( 1) 00:15:15.145 4.788 - 4.812: 98.7695% ( 1) 00:15:15.145 4.930 - 4.954: 98.7768% ( 1) 00:15:15.145 5.144 - 5.167: 9[2024-07-20 18:45:25.377628] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:15.145 8.7842% ( 1) 00:15:15.145 5.167 - 5.191: 98.7915% ( 1) 00:15:15.145 5.262 - 5.286: 98.7988% ( 1) 00:15:15.145 5.950 - 5.973: 98.8061% ( 1) 00:15:15.145 5.973 - 5.997: 98.8134% ( 1) 00:15:15.145 6.068 - 6.116: 98.8208% ( 1) 00:15:15.145 6.447 - 6.495: 98.8354% ( 2) 00:15:15.145 6.827 - 6.874: 98.8427% ( 1) 00:15:15.145 7.111 - 7.159: 98.8501% ( 1) 00:15:15.145 7.159 - 7.206: 98.8574% ( 1) 00:15:15.145 7.538 - 7.585: 98.8647% ( 1) 00:15:15.145 11.520 - 11.567: 98.8720% ( 1) 00:15:15.145 11.662 - 11.710: 98.8794% ( 1) 00:15:15.145 15.550 - 15.644: 98.8940% ( 2) 00:15:15.145 15.644 - 15.739: 98.9160% ( 3) 00:15:15.145 15.739 - 15.834: 98.9306% ( 2) 00:15:15.145 15.834 - 15.929: 98.9599% ( 4) 00:15:15.145 15.929 - 16.024: 98.9746% ( 2) 00:15:15.145 16.024 - 16.119: 98.9966% ( 3) 00:15:15.145 16.119 - 16.213: 99.0332% ( 5) 00:15:15.145 16.213 - 16.308: 99.0478% ( 2) 00:15:15.145 16.308 - 16.403: 99.0845% ( 5) 00:15:15.145 16.403 - 16.498: 99.1357% ( 7) 00:15:15.145 16.498 - 16.593: 99.2236% ( 12) 00:15:15.145 16.593 - 16.687: 99.2456% ( 3) 00:15:15.145 16.687 - 16.782: 99.2529% ( 1) 00:15:15.145 16.782 - 16.877: 99.2822% ( 4) 00:15:15.145 16.877 - 16.972: 99.2969% ( 2) 00:15:15.145 16.972 - 17.067: 99.3262% ( 4) 00:15:15.145 17.067 - 17.161: 99.3555% ( 4) 00:15:15.145 17.161 - 17.256: 99.3628% ( 1) 00:15:15.145 17.351 - 17.446: 99.3701% ( 1) 00:15:15.145 17.446 - 17.541: 99.3774% ( 1) 00:15:15.145 17.541 - 17.636: 99.3848% ( 1) 00:15:15.145 17.636 - 17.730: 99.3994% ( 2) 00:15:15.145 17.730 - 17.825: 99.4067% ( 1) 00:15:15.145 17.825 - 17.920: 99.4140% ( 1) 00:15:15.145 17.920 - 18.015: 99.4214% ( 1) 00:15:15.145 18.015 - 18.110: 99.4287% ( 1) 00:15:15.145 18.204 - 18.299: 99.4360% ( 1) 00:15:15.145 18.679 - 18.773: 99.4433% ( 1) 00:15:15.145 21.523 - 21.618: 99.4507% ( 1) 00:15:15.145 2002.489 - 2014.625: 99.4580% ( 1) 00:15:15.145 3155.437 - 3179.710: 99.4653% ( 1) 00:15:15.145 3980.705 - 4004.978: 99.8682% ( 55) 00:15:15.145 4004.978 - 4029.250: 99.9854% ( 16) 00:15:15.145 5995.330 - 6019.603: 100.0000% ( 2) 00:15:15.145 00:15:15.145 18:45:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:15.145 18:45:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:15.145 18:45:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:15.145 18:45:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:15.145 18:45:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:15.402 [ 00:15:15.402 { 00:15:15.402 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:15.402 
"subtype": "Discovery", 00:15:15.402 "listen_addresses": [], 00:15:15.402 "allow_any_host": true, 00:15:15.402 "hosts": [] 00:15:15.402 }, 00:15:15.402 { 00:15:15.402 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:15.402 "subtype": "NVMe", 00:15:15.402 "listen_addresses": [ 00:15:15.402 { 00:15:15.402 "trtype": "VFIOUSER", 00:15:15.402 "adrfam": "IPv4", 00:15:15.403 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:15.403 "trsvcid": "0" 00:15:15.403 } 00:15:15.403 ], 00:15:15.403 "allow_any_host": true, 00:15:15.403 "hosts": [], 00:15:15.403 "serial_number": "SPDK1", 00:15:15.403 "model_number": "SPDK bdev Controller", 00:15:15.403 "max_namespaces": 32, 00:15:15.403 "min_cntlid": 1, 00:15:15.403 "max_cntlid": 65519, 00:15:15.403 "namespaces": [ 00:15:15.403 { 00:15:15.403 "nsid": 1, 00:15:15.403 "bdev_name": "Malloc1", 00:15:15.403 "name": "Malloc1", 00:15:15.403 "nguid": "41A8A81CB4E64E9294EFD9889C49C0D5", 00:15:15.403 "uuid": "41a8a81c-b4e6-4e92-94ef-d9889c49c0d5" 00:15:15.403 }, 00:15:15.403 { 00:15:15.403 "nsid": 2, 00:15:15.403 "bdev_name": "Malloc3", 00:15:15.403 "name": "Malloc3", 00:15:15.403 "nguid": "00460E954F5546C59ADC25B2BE818C1E", 00:15:15.403 "uuid": "00460e95-4f55-46c5-9adc-25b2be818c1e" 00:15:15.403 } 00:15:15.403 ] 00:15:15.403 }, 00:15:15.403 { 00:15:15.403 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:15.403 "subtype": "NVMe", 00:15:15.403 "listen_addresses": [ 00:15:15.403 { 00:15:15.403 "trtype": "VFIOUSER", 00:15:15.403 "adrfam": "IPv4", 00:15:15.403 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:15.403 "trsvcid": "0" 00:15:15.403 } 00:15:15.403 ], 00:15:15.403 "allow_any_host": true, 00:15:15.403 "hosts": [], 00:15:15.403 "serial_number": "SPDK2", 00:15:15.403 "model_number": "SPDK bdev Controller", 00:15:15.403 "max_namespaces": 32, 00:15:15.403 "min_cntlid": 1, 00:15:15.403 "max_cntlid": 65519, 00:15:15.403 "namespaces": [ 00:15:15.403 { 00:15:15.403 "nsid": 1, 00:15:15.403 "bdev_name": "Malloc2", 00:15:15.403 "name": "Malloc2", 00:15:15.403 "nguid": "4A12207D59504636B5762D6181FC0419", 00:15:15.403 "uuid": "4a12207d-5950-4636-b576-2d6181fc0419" 00:15:15.403 } 00:15:15.403 ] 00:15:15.403 } 00:15:15.403 ] 00:15:15.403 18:45:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:15.403 18:45:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1353588 00:15:15.403 18:45:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:15.403 18:45:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:15.403 18:45:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:15.403 18:45:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:15.403 18:45:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:15.403 18:45:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:15.403 18:45:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:15.403 18:45:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:15.660 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.660 [2024-07-20 18:45:25.829488] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:15.660 Malloc4 00:15:15.660 18:45:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:15.918 [2024-07-20 18:45:26.202363] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:15.918 18:45:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:16.174 Asynchronous Event Request test 00:15:16.174 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:16.174 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:16.174 Registering asynchronous event callbacks... 00:15:16.174 Starting namespace attribute notice tests for all controllers... 00:15:16.174 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:16.174 aer_cb - Changed Namespace 00:15:16.174 Cleaning up... 00:15:16.174 [ 00:15:16.174 { 00:15:16.174 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:16.174 "subtype": "Discovery", 00:15:16.174 "listen_addresses": [], 00:15:16.174 "allow_any_host": true, 00:15:16.174 "hosts": [] 00:15:16.174 }, 00:15:16.174 { 00:15:16.174 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:16.174 "subtype": "NVMe", 00:15:16.174 "listen_addresses": [ 00:15:16.174 { 00:15:16.174 "trtype": "VFIOUSER", 00:15:16.174 "adrfam": "IPv4", 00:15:16.174 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:16.174 "trsvcid": "0" 00:15:16.174 } 00:15:16.174 ], 00:15:16.174 "allow_any_host": true, 00:15:16.174 "hosts": [], 00:15:16.174 "serial_number": "SPDK1", 00:15:16.174 "model_number": "SPDK bdev Controller", 00:15:16.174 "max_namespaces": 32, 00:15:16.174 "min_cntlid": 1, 00:15:16.174 "max_cntlid": 65519, 00:15:16.174 "namespaces": [ 00:15:16.174 { 00:15:16.174 "nsid": 1, 00:15:16.174 "bdev_name": "Malloc1", 00:15:16.174 "name": "Malloc1", 00:15:16.174 "nguid": "41A8A81CB4E64E9294EFD9889C49C0D5", 00:15:16.174 "uuid": "41a8a81c-b4e6-4e92-94ef-d9889c49c0d5" 00:15:16.174 }, 00:15:16.174 { 00:15:16.174 "nsid": 2, 00:15:16.174 "bdev_name": "Malloc3", 00:15:16.174 "name": "Malloc3", 00:15:16.174 "nguid": "00460E954F5546C59ADC25B2BE818C1E", 00:15:16.174 "uuid": "00460e95-4f55-46c5-9adc-25b2be818c1e" 00:15:16.174 } 00:15:16.174 ] 00:15:16.174 }, 00:15:16.174 { 00:15:16.174 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:16.174 "subtype": "NVMe", 00:15:16.174 "listen_addresses": [ 00:15:16.174 { 00:15:16.174 "trtype": "VFIOUSER", 00:15:16.174 "adrfam": "IPv4", 00:15:16.174 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:16.174 "trsvcid": "0" 00:15:16.174 } 00:15:16.174 ], 00:15:16.174 "allow_any_host": true, 00:15:16.174 "hosts": [], 00:15:16.174 "serial_number": "SPDK2", 00:15:16.174 "model_number": "SPDK bdev Controller", 00:15:16.174 
"max_namespaces": 32, 00:15:16.174 "min_cntlid": 1, 00:15:16.174 "max_cntlid": 65519, 00:15:16.174 "namespaces": [ 00:15:16.174 { 00:15:16.174 "nsid": 1, 00:15:16.174 "bdev_name": "Malloc2", 00:15:16.174 "name": "Malloc2", 00:15:16.174 "nguid": "4A12207D59504636B5762D6181FC0419", 00:15:16.174 "uuid": "4a12207d-5950-4636-b576-2d6181fc0419" 00:15:16.174 }, 00:15:16.174 { 00:15:16.174 "nsid": 2, 00:15:16.174 "bdev_name": "Malloc4", 00:15:16.174 "name": "Malloc4", 00:15:16.174 "nguid": "B65576F1034047129CA8F63449080740", 00:15:16.174 "uuid": "b65576f1-0340-4712-9ca8-f63449080740" 00:15:16.174 } 00:15:16.174 ] 00:15:16.174 } 00:15:16.174 ] 00:15:16.175 18:45:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1353588 00:15:16.175 18:45:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:16.175 18:45:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1347380 00:15:16.175 18:45:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 1347380 ']' 00:15:16.175 18:45:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 1347380 00:15:16.175 18:45:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:16.175 18:45:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:16.175 18:45:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1347380 00:15:16.175 18:45:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:16.175 18:45:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:16.175 18:45:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1347380' 00:15:16.175 killing process with pid 1347380 00:15:16.175 18:45:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 1347380 00:15:16.431 18:45:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 1347380 00:15:16.688 18:45:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:16.688 18:45:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:16.688 18:45:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:16.688 18:45:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:16.688 18:45:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:16.688 18:45:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1353728 00:15:16.688 18:45:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:16.688 18:45:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1353728' 00:15:16.688 Process pid: 1353728 00:15:16.688 18:45:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:16.688 18:45:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1353728 00:15:16.688 18:45:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 1353728 ']' 00:15:16.688 18:45:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.688 18:45:26 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:16.688 18:45:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.688 18:45:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:16.688 18:45:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:16.688 [2024-07-20 18:45:26.885183] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:16.688 [2024-07-20 18:45:26.886339] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:16.688 [2024-07-20 18:45:26.886417] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.688 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.688 [2024-07-20 18:45:26.952223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:16.946 [2024-07-20 18:45:27.043095] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.946 [2024-07-20 18:45:27.043157] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:16.946 [2024-07-20 18:45:27.043184] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:16.946 [2024-07-20 18:45:27.043199] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:16.946 [2024-07-20 18:45:27.043211] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:16.946 [2024-07-20 18:45:27.043293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.946 [2024-07-20 18:45:27.043363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.946 [2024-07-20 18:45:27.043457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:16.946 [2024-07-20 18:45:27.043459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.946 [2024-07-20 18:45:27.149407] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:16.946 [2024-07-20 18:45:27.149598] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:16.946 [2024-07-20 18:45:27.149992] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:16.946 [2024-07-20 18:45:27.150546] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:16.946 [2024-07-20 18:45:27.150807] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
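Sketch (not captured output): the shell trace that resumes below re-creates the two vfio-user controllers for the interrupt-mode pass, this time passing '-M -I' to nvmf_create_transport. A condensed, hedged sketch of the equivalent RPC sequence — assuming the same SPDK checkout path and the default /var/tmp/spdk.sock RPC socket; the $rpc variable and the for-loop are illustrative shorthand, not taken from the trace — is:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER -M -I              # transport args used by the interrupt-mode test pass
  mkdir -p /var/run/vfio-user
  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i      # per-controller vfio-user socket directory
      $rpc bdev_malloc_create 64 512 -b Malloc$i             # 64 MB malloc bdev, 512-byte blocks
      $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
          -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done

A host-side tool then attaches with the transport ID already shown earlier in the trace, e.g. -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'.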
00:15:16.946 18:45:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:16.946 18:45:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:16.946 18:45:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:17.876 18:45:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:18.133 18:45:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:18.133 18:45:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:18.133 18:45:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:18.133 18:45:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:18.133 18:45:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:18.697 Malloc1 00:15:18.698 18:45:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:18.955 18:45:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:19.225 18:45:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:19.482 18:45:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:19.483 18:45:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:19.483 18:45:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:19.740 Malloc2 00:15:19.740 18:45:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:19.998 18:45:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:20.256 18:45:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:20.514 18:45:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:20.514 18:45:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1353728 00:15:20.514 18:45:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 1353728 ']' 00:15:20.514 18:45:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 1353728 00:15:20.514 18:45:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:20.514 18:45:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:20.514 18:45:30 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1353728 00:15:20.514 18:45:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:20.514 18:45:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:20.514 18:45:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1353728' 00:15:20.514 killing process with pid 1353728 00:15:20.514 18:45:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 1353728 00:15:20.515 18:45:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 1353728 00:15:20.774 18:45:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:20.774 18:45:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:20.774 00:15:20.774 real 0m52.612s 00:15:20.774 user 3m27.363s 00:15:20.774 sys 0m4.333s 00:15:20.774 18:45:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:20.774 18:45:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:20.774 ************************************ 00:15:20.774 END TEST nvmf_vfio_user 00:15:20.774 ************************************ 00:15:20.774 18:45:31 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:20.774 18:45:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:20.774 18:45:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:20.774 18:45:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:20.774 ************************************ 00:15:20.774 START TEST nvmf_vfio_user_nvme_compliance 00:15:20.774 ************************************ 00:15:20.774 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:20.774 * Looking for test storage... 
00:15:20.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:20.774 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:20.774 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:20.774 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.774 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.774 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.774 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=1354320 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1354320' 00:15:21.033 Process pid: 1354320 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1354320 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 1354320 ']' 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:21.033 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:21.033 [2024-07-20 18:45:31.158013] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:21.033 [2024-07-20 18:45:31.158105] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.033 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.033 [2024-07-20 18:45:31.216982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:21.033 [2024-07-20 18:45:31.304110] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:21.033 [2024-07-20 18:45:31.304192] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:21.033 [2024-07-20 18:45:31.304207] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:21.033 [2024-07-20 18:45:31.304219] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:21.033 [2024-07-20 18:45:31.304228] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
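Once this target (nvmf_tgt -i 0 -e 0xFFFF -m 0x7) is listening, the compliance run traced below boils down to exposing one malloc-backed namespace over vfio-user and pointing the nvme_compliance binary at it. A rough sketch, where rpc_cmd stands for the autotest wrapper around scripts/rpc.py and $SPDK abbreviates the workspace path:
rpc_cmd nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
rpc_cmd bdev_malloc_create 64 512 -b malloc0
rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32   # -m 32: allow up to 32 namespaces
rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
# drive the admin/IO-queue compliance cases against the emulated controller
$SPDK/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
Each passed case in the CUnit output further down brackets its expected *ERROR* lines between an "enabling controller" and a "disabling controller" notice from vfio_user.c.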
00:15:21.033 [2024-07-20 18:45:31.304317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.033 [2024-07-20 18:45:31.304447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:21.033 [2024-07-20 18:45:31.304449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.292 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:21.292 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:15:21.292 18:45:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:22.229 18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:22.229 18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:22.229 18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:22.229 18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.229 18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:22.229 18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.229 18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:22.229 18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:22.229 18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.229 18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:22.229 malloc0 00:15:22.229 18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.229 18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:22.229 18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.229 18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:22.229 18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.229 18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:22.229 18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.229 18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:22.229 18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.229 18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:22.229 18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.229 18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:22.229 18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.229 
18:45:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:22.229 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.489 00:15:22.490 00:15:22.490 CUnit - A unit testing framework for C - Version 2.1-3 00:15:22.490 http://cunit.sourceforge.net/ 00:15:22.490 00:15:22.490 00:15:22.490 Suite: nvme_compliance 00:15:22.490 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-20 18:45:32.640283] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.490 [2024-07-20 18:45:32.641703] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:22.490 [2024-07-20 18:45:32.641728] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:22.490 [2024-07-20 18:45:32.641740] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:22.490 [2024-07-20 18:45:32.643300] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:22.490 passed 00:15:22.490 Test: admin_identify_ctrlr_verify_fused ...[2024-07-20 18:45:32.727897] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.490 [2024-07-20 18:45:32.730914] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:22.490 passed 00:15:22.748 Test: admin_identify_ns ...[2024-07-20 18:45:32.816359] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.748 [2024-07-20 18:45:32.876809] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:22.748 [2024-07-20 18:45:32.884823] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:22.748 [2024-07-20 18:45:32.905936] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:22.748 passed 00:15:22.748 Test: admin_get_features_mandatory_features ...[2024-07-20 18:45:32.990961] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.748 [2024-07-20 18:45:32.993982] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:22.748 passed 00:15:23.006 Test: admin_get_features_optional_features ...[2024-07-20 18:45:33.078526] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:23.006 [2024-07-20 18:45:33.081545] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.006 passed 00:15:23.006 Test: admin_set_features_number_of_queues ...[2024-07-20 18:45:33.162796] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:23.006 [2024-07-20 18:45:33.269919] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.006 passed 00:15:23.263 Test: admin_get_log_page_mandatory_logs ...[2024-07-20 18:45:33.353615] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:23.263 [2024-07-20 18:45:33.356636] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.263 passed 00:15:23.263 Test: admin_get_log_page_with_lpo ...[2024-07-20 18:45:33.438903] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:23.263 [2024-07-20 18:45:33.506810] 
ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:23.263 [2024-07-20 18:45:33.519914] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.263 passed 00:15:23.520 Test: fabric_property_get ...[2024-07-20 18:45:33.603280] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:23.520 [2024-07-20 18:45:33.604561] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:23.520 [2024-07-20 18:45:33.606302] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.520 passed 00:15:23.520 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-20 18:45:33.690871] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:23.520 [2024-07-20 18:45:33.692176] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:23.520 [2024-07-20 18:45:33.693891] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.520 passed 00:15:23.520 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-20 18:45:33.777073] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:23.777 [2024-07-20 18:45:33.860805] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:23.777 [2024-07-20 18:45:33.876803] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:23.777 [2024-07-20 18:45:33.881897] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.777 passed 00:15:23.777 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-20 18:45:33.967163] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:23.777 [2024-07-20 18:45:33.968456] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:23.777 [2024-07-20 18:45:33.970194] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.777 passed 00:15:23.777 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-20 18:45:34.055372] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:24.034 [2024-07-20 18:45:34.128818] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:24.034 [2024-07-20 18:45:34.152802] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:24.034 [2024-07-20 18:45:34.157924] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:24.034 passed 00:15:24.034 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-20 18:45:34.244153] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:24.034 [2024-07-20 18:45:34.245486] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:24.034 [2024-07-20 18:45:34.245522] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:24.034 [2024-07-20 18:45:34.247174] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:24.034 passed 00:15:24.034 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-20 18:45:34.328320] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:24.291 [2024-07-20 18:45:34.420801] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:15:24.291 [2024-07-20 18:45:34.428804] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:24.291 [2024-07-20 18:45:34.436805] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:24.291 [2024-07-20 18:45:34.444799] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:24.291 [2024-07-20 18:45:34.473912] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:24.291 passed 00:15:24.291 Test: admin_create_io_sq_verify_pc ...[2024-07-20 18:45:34.557632] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:24.291 [2024-07-20 18:45:34.570816] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:24.291 [2024-07-20 18:45:34.590979] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:24.548 passed 00:15:24.548 Test: admin_create_io_qp_max_qps ...[2024-07-20 18:45:34.676570] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:25.479 [2024-07-20 18:45:35.767810] nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:26.071 [2024-07-20 18:45:36.149862] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.071 passed 00:15:26.071 Test: admin_create_io_sq_shared_cq ...[2024-07-20 18:45:36.233257] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.071 [2024-07-20 18:45:36.364817] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:26.329 [2024-07-20 18:45:36.401907] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.329 passed 00:15:26.329 00:15:26.329 Run Summary: Type Total Ran Passed Failed Inactive 00:15:26.329 suites 1 1 n/a 0 0 00:15:26.329 tests 18 18 18 0 0 00:15:26.329 asserts 360 360 360 0 n/a 00:15:26.329 00:15:26.329 Elapsed time = 1.561 seconds 00:15:26.329 18:45:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1354320 00:15:26.329 18:45:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 1354320 ']' 00:15:26.329 18:45:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 1354320 00:15:26.329 18:45:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:15:26.329 18:45:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:26.329 18:45:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1354320 00:15:26.329 18:45:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:26.329 18:45:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:26.329 18:45:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1354320' 00:15:26.329 killing process with pid 1354320 00:15:26.329 18:45:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # kill 1354320 00:15:26.329 18:45:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 1354320 00:15:26.587 18:45:36 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:26.587 18:45:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:26.587 00:15:26.587 real 0m5.707s 00:15:26.587 user 0m16.043s 00:15:26.587 sys 0m0.548s 00:15:26.587 18:45:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:26.587 18:45:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:26.587 ************************************ 00:15:26.587 END TEST nvmf_vfio_user_nvme_compliance 00:15:26.587 ************************************ 00:15:26.587 18:45:36 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:26.587 18:45:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:26.587 18:45:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:26.587 18:45:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:26.587 ************************************ 00:15:26.587 START TEST nvmf_vfio_user_fuzz 00:15:26.587 ************************************ 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:26.588 * Looking for test storage... 00:15:26.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
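Every one of these tests sources test/nvmf/common.sh first; stripped of the xtrace noise, the part being traced here just derives a host identity and accumulates the common nvmf_tgt arguments, roughly:
NVME_HOSTNQN=$(nvme gen-hostnqn)              # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}           # hypothetical spelling; the trace only shows the resulting 5b23e107-... value
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id and tracepoint group mask handed to nvmf_tgt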
00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1355059 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1355059' 00:15:26.588 Process pid: 1355059 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1355059 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 1355059 ']' 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
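The fuzz pass that follows reuses the single-device layout (one malloc namespace behind nqn.2021-09.io.spdk:cnode0 on a vfio-user listener) and then lets the generic nvme_fuzz tool hammer it for 30 seconds with a fixed seed. Condensed, with $SPDK again abbreviating the workspace path:
rpc_cmd nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
rpc_cmd bdev_malloc_create 64 512 -b malloc0
rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
# 30 s of randomized admin and I/O commands; the flags are exactly those in the trace below
$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
The per-queue counters printed after "Fuzzing completed." summarize how many of those randomized commands the emulated controller accepted.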
00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:26.588 18:45:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:26.846 18:45:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:26.846 18:45:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:15:26.846 18:45:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:28.241 18:45:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:28.241 18:45:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.241 18:45:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:28.241 18:45:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.241 18:45:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:28.241 18:45:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:28.241 18:45:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.241 18:45:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:28.241 malloc0 00:15:28.241 18:45:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.241 18:45:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:28.241 18:45:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.241 18:45:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:28.241 18:45:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.241 18:45:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:28.241 18:45:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.241 18:45:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:28.241 18:45:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.241 18:45:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:28.241 18:45:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.241 18:45:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:28.241 18:45:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.241 18:45:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:28.241 18:45:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:00.315 Fuzzing completed. 
Shutting down the fuzz application 00:16:00.315 00:16:00.315 Dumping successful admin opcodes: 00:16:00.315 8, 9, 10, 24, 00:16:00.315 Dumping successful io opcodes: 00:16:00.315 0, 00:16:00.315 NS: 0x200003a1ef00 I/O qp, Total commands completed: 580581, total successful commands: 2231, random_seed: 613005696 00:16:00.315 NS: 0x200003a1ef00 admin qp, Total commands completed: 73912, total successful commands: 581, random_seed: 3044803264 00:16:00.315 18:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:00.315 18:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.315 18:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:00.315 18:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.315 18:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1355059 00:16:00.315 18:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 1355059 ']' 00:16:00.315 18:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 1355059 00:16:00.315 18:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:16:00.315 18:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:00.315 18:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1355059 00:16:00.315 18:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:00.315 18:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:00.315 18:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1355059' 00:16:00.315 killing process with pid 1355059 00:16:00.315 18:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 1355059 00:16:00.315 18:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 1355059 00:16:00.315 18:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:00.315 18:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:00.315 00:16:00.315 real 0m32.204s 00:16:00.315 user 0m31.525s 00:16:00.315 sys 0m29.711s 00:16:00.315 18:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:00.315 18:46:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:00.315 ************************************ 00:16:00.315 END TEST nvmf_vfio_user_fuzz 00:16:00.315 ************************************ 00:16:00.315 18:46:09 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:00.315 18:46:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:00.315 18:46:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:00.315 18:46:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:00.315 ************************************ 00:16:00.315 START TEST nvmf_host_management 00:16:00.315 
************************************ 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:00.315 * Looking for test storage... 00:16:00.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:00.315 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:00.316 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:00.316 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:00.316 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:00.316 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
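nvmftestinit for the phy jobs then enumerates the E810 ports (the two 0x159b functions found below become cvl_0_0 and cvl_0_1) and moves one of them into a private network namespace so the NVMe/TCP target and initiator talk over the real NIC pair. The addressing it sets up reduces to roughly:
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                               # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                     # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0       # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT            # let the NVMe/TCP port through
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # reachability check both ways
From here on, every nvmf_tgt invocation in this test is prefixed with "ip netns exec cvl_0_0_ns_spdk", which is why the host-management target below starts under that wrapper with -m 0x1E.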
00:16:00.316 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.316 18:46:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:00.316 18:46:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.316 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:00.316 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:00.316 18:46:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:00.316 18:46:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:00.883 18:46:11 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:00.883 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:00.883 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:00.883 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:00.883 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:00.884 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:00.884 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:00.884 18:46:11 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:01.141 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:01.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:01.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:16:01.141 00:16:01.141 --- 10.0.0.2 ping statistics --- 00:16:01.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.141 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:16:01.141 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:01.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:01.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:16:01.141 00:16:01.141 --- 10.0.0.1 ping statistics --- 00:16:01.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.142 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1360500 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1360500 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 1360500 ']' 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
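The block above reduces to a short stretch of shell: the two ice ports (8086:159b) are mapped to their kernel net devices through sysfs, the target-side interface is moved into a private network namespace, and connectivity is checked in both directions before any NVMe/TCP traffic is attempted. A rough sketch of the equivalent commands, using the interface names and addresses this particular run detected (the sysfs loop is an approximation of what nvmf/common.sh does, not a verbatim copy):

# map each detected ice port (8086:159b) to its kernel net device
for pci in 0000:0a:00.0 0000:0a:00.1; do
    net=$(ls "/sys/bus/pci/devices/$pci/net/")          # cvl_0_0 and cvl_0_1 on this host
    echo "Found net devices under $pci: $net"
done

# target interface goes into its own namespace; the initiator stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP port and confirm reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every nvmf_tgt and target-side command that follows is then wrapped in ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD array prepended to NVMF_APP), which is why the target listens on 10.0.0.2 while bdevperf connects from the root namespace.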
00:16:01.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:01.142 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:01.142 [2024-07-20 18:46:11.288842] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:01.142 [2024-07-20 18:46:11.288914] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.142 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.142 [2024-07-20 18:46:11.354486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:01.142 [2024-07-20 18:46:11.441445] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.142 [2024-07-20 18:46:11.441497] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:01.142 [2024-07-20 18:46:11.441517] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.142 [2024-07-20 18:46:11.441528] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:01.142 [2024-07-20 18:46:11.441537] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:01.142 [2024-07-20 18:46:11.441646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.142 [2024-07-20 18:46:11.441705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:01.142 [2024-07-20 18:46:11.441768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:01.142 [2024-07-20 18:46:11.441771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:01.400 [2024-07-20 18:46:11.597669] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:01.400 Malloc0 00:16:01.400 [2024-07-20 18:46:11.658988] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1360547 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1360547 /var/tmp/bdevperf.sock 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 1360547 ']' 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:01.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
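The rpcs.txt batch that host_management.sh pipes into rpc_cmd is not echoed in the trace; only its effects are visible (a Malloc0 bdev and a listener on 10.0.0.2:4420, plus the host NQN that gets removed and re-added later). A plausible reconstruction of that batch as individual scripts/rpc.py calls — the malloc size, block size and serial number here are assumptions, not values taken from the log:

rpc=./scripts/rpc.py                                             # talks to /var/tmp/spdk.sock
$rpc nvmf_create_transport -t tcp -o -u 8192                     # issued earlier in the trace
$rpc bdev_malloc_create -b Malloc0 64 512                        # 64 MiB bdev, 512 B blocks (assumed)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0   # serial number assumed
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

Keeping the subsystem restricted to an explicit host list (rather than allow-any-host) is what makes the nvmf_subsystem_remove_host fault injection further down possible.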
00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:01.400 { 00:16:01.400 "params": { 00:16:01.400 "name": "Nvme$subsystem", 00:16:01.400 "trtype": "$TEST_TRANSPORT", 00:16:01.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:01.400 "adrfam": "ipv4", 00:16:01.400 "trsvcid": "$NVMF_PORT", 00:16:01.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:01.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:01.400 "hdgst": ${hdgst:-false}, 00:16:01.400 "ddgst": ${ddgst:-false} 00:16:01.400 }, 00:16:01.400 "method": "bdev_nvme_attach_controller" 00:16:01.400 } 00:16:01.400 EOF 00:16:01.400 )") 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:01.400 18:46:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:01.400 "params": { 00:16:01.400 "name": "Nvme0", 00:16:01.400 "trtype": "tcp", 00:16:01.400 "traddr": "10.0.0.2", 00:16:01.400 "adrfam": "ipv4", 00:16:01.400 "trsvcid": "4420", 00:16:01.400 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:01.400 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:01.400 "hdgst": false, 00:16:01.400 "ddgst": false 00:16:01.400 }, 00:16:01.400 "method": "bdev_nvme_attach_controller" 00:16:01.400 }' 00:16:01.659 [2024-07-20 18:46:11.739580] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:01.659 [2024-07-20 18:46:11.739664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1360547 ] 00:16:01.659 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.659 [2024-07-20 18:46:11.802176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.659 [2024-07-20 18:46:11.888435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.916 Running I/O for 10 seconds... 
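bdevperf never sees a config file on disk: gen_nvmf_target_json assembles a bdev-subsystem JSON on the fly and the script hands it over through a process substitution, which is the /dev/fd/63 in the command line above. A sketch of an equivalent standalone invocation using a temporary file (/tmp/nvme0_bdev.json is a hypothetical path); the outer "subsystems"/"config" wrapper is an assumption about what the helper emits — only the params/method fragment is printed verbatim in the trace:

cat > /tmp/nvme0_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0_bdev.json -q 64 -o 65536 -w verify -t 10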
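Immediately after the launch, the script polls bdevperf over its private RPC socket until the Nvme0n1 bdev has completed at least 100 reads, so the fault is injected only while I/O is actually in flight (the trace below shows 3 reads on the first poll and 322 a quarter of a second later). Roughly, the waitforio helper amounts to:

i=10
while (( i != 0 )); do
    reads=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break     # enough traffic flowing; go inject the fault
    sleep 0.25
    (( i-- ))
done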
00:16:01.916 18:46:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:01.916 18:46:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:01.916 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:01.916 18:46:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.916 18:46:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:01.916 18:46:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.916 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:01.916 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:01.916 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:01.916 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:01.916 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:01.916 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:01.916 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:01.916 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:01.916 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:01.916 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:01.916 18:46:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.916 18:46:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:01.916 18:46:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.916 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=3 00:16:01.916 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 3 -ge 100 ']' 00:16:01.916 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:02.173 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:02.173 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:02.173 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:02.173 18:46:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.173 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:02.173 18:46:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:02.173 18:46:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.173 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=322 00:16:02.173 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 322 -ge 100 ']' 00:16:02.173 18:46:12 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:16:02.173 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:02.173 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:02.173 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:02.173 18:46:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.173 18:46:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:02.432 [2024-07-20 18:46:12.501671] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021120 is same with the state(5) to be set 00:16:02.432 [2024-07-20 18:46:12.501743] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021120 is same with the state(5) to be set 00:16:02.432 [2024-07-20 18:46:12.501769] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021120 is same with the state(5) to be set 00:16:02.432 [2024-07-20 18:46:12.501808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021120 is same with the state(5) to be set 00:16:02.432 [2024-07-20 18:46:12.501824] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021120 is same with the state(5) to be set 00:16:02.432 [2024-07-20 18:46:12.501838] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021120 is same with the state(5) to be set 00:16:02.432 [2024-07-20 18:46:12.501850] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021120 is same with the state(5) to be set 00:16:02.432 [2024-07-20 18:46:12.501863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021120 is same with the state(5) to be set 00:16:02.432 [2024-07-20 18:46:12.501875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021120 is same with the state(5) to be set 00:16:02.432 [2024-07-20 18:46:12.501888] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021120 is same with the state(5) to be set 00:16:02.432 [2024-07-20 18:46:12.501915] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021120 is same with the state(5) to be set 00:16:02.432 [2024-07-20 18:46:12.501938] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021120 is same with the state(5) to be set 00:16:02.432 [2024-07-20 18:46:12.501951] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021120 is same with the state(5) to be set 00:16:02.432 [2024-07-20 18:46:12.501963] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021120 is same with the state(5) to be set 00:16:02.432 [2024-07-20 18:46:12.501975] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021120 is same with the state(5) to be set 00:16:02.432 [2024-07-20 18:46:12.501988] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021120 is same with the state(5) to be set 00:16:02.432 [2024-07-20 18:46:12.502000] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021120 is same with the state(5) to be set 00:16:02.432 [2024-07-20 18:46:12.502012] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021120 is same with the state(5) 
to be set 00:16:02.432 [2024-07-20 18:46:12.502024] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021120 is same with the state(5) to be set 00:16:02.432 (this identical nvmf_tcp_qpair_set_recv_state error for tqpair=0x2021120 repeats once per entry from 18:46:12.502036 through 18:46:12.502661) 00:16:02.433 [2024-07-20 18:46:12.502675]
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021120 is same with the state(5) to be set 00:16:02.433 [2024-07-20 18:46:12.502690] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021120 is same with the state(5) to be set 00:16:02.433 [2024-07-20 18:46:12.503228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.503277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.503309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.503327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.503344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.503360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.503376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.503391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.503407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.503422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.503438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.503452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.503468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.503483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.503499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.503513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.503529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.503544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.503560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.503575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.503591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.503605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.503621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.503636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.503652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.503666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.503688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.503704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.503721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.503735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.503751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.503766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.503783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.503808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.503827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.503848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.503864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.503878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.503896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.503911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.503927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.503942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.503958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.503973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.503989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.504004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.504020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.504034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.504050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.504064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.504079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.504123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.504140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.504154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.504169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.504183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.504199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.504213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.504228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44672 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.504242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.504257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.504272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.504288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.504302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.504317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.504331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.504346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.504360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.433 [2024-07-20 18:46:12.504376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.433 [2024-07-20 18:46:12.504389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.504405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.504419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.504435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.504449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.504464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.504478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.504498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.504513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.504529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.504542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.504558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.504572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.504588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.504601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.504616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.504630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.504649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.504663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.504679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.504694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.504710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.504724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.504740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.504755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.504786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.504811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.504828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.504847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.504863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:02.434 [2024-07-20 18:46:12.504878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.504894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.504913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.504930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.504945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.504962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.504977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.504994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.505008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.505025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.505040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.505055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.505070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.505101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.505116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.505133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.505148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.505163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.505178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.505193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 
18:46:12.505209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.505225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.505239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.505255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.505269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.505284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:48896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.505298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.505322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.434 [2024-07-20 18:46:12.505337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.434 [2024-07-20 18:46:12.505351] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3b110 is same with the state(5) to be set 00:16:02.434 [2024-07-20 18:46:12.505429] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe3b110 was disconnected and freed. reset controller. 
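The burst of ABORTED - SQ DELETION completions above is the direct effect of the nvmf_subsystem_remove_host call issued a few entries earlier: the target drops the host from the allowed list, tears the queue pair down, and every READ still queued on it completes back to bdevperf with an abort status. The LBAs run from 40960 to 49024 in 128-block steps (cid 0 through 63), i.e. exactly one abort per outstanding command at the configured queue depth:

echo $(( (49024 - 40960) / 128 + 1 ))    # 64 aborted READs == the -q 64 queue depth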
00:16:02.434 18:46:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.434 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:02.434 18:46:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.434 18:46:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:02.434 [2024-07-20 18:46:12.506638] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:02.434 task offset: 40960 on job bdev=Nvme0n1 fails 00:16:02.434 00:16:02.434 Latency(us) 00:16:02.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.434 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:02.434 Job: Nvme0n1 ended in about 0.39 seconds with error 00:16:02.434 Verification LBA range: start 0x0 length 0x400 00:16:02.434 Nvme0n1 : 0.39 817.19 51.07 163.44 0.00 63560.19 6553.60 53982.25 00:16:02.434 =================================================================================================================== 00:16:02.434 Total : 817.19 51.07 163.44 0.00 63560.19 6553.60 53982.25 00:16:02.434 [2024-07-20 18:46:12.508722] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:02.434 [2024-07-20 18:46:12.508752] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2a1e0 (9): Bad file descriptor 00:16:02.434 18:46:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.434 18:46:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:02.434 [2024-07-20 18:46:12.560479] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
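The failed-run numbers are internally consistent: at the 65536-byte I/O size, 817.19 IOPS works out to about 51.07 MiB/s, and 163.44 failures/s over the roughly 0.39 s runtime is about 64 failed I/Os — again the full queue depth that was aborted. A quick check:

printf '%.2f MiB/s\n' "$(echo '817.19 * 65536 / 1048576' | bc -l)"    # 51.07
printf '%.0f aborted I/Os\n' "$(echo '163.44 * 0.39' | bc -l)"        # ~64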
00:16:03.365 18:46:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1360547 00:16:03.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1360547) - No such process 00:16:03.365 18:46:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:03.365 18:46:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:03.365 18:46:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:03.365 18:46:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:03.365 18:46:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:03.365 18:46:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:03.365 18:46:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:03.365 18:46:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:03.365 { 00:16:03.365 "params": { 00:16:03.365 "name": "Nvme$subsystem", 00:16:03.365 "trtype": "$TEST_TRANSPORT", 00:16:03.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:03.365 "adrfam": "ipv4", 00:16:03.365 "trsvcid": "$NVMF_PORT", 00:16:03.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:03.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:03.365 "hdgst": ${hdgst:-false}, 00:16:03.365 "ddgst": ${ddgst:-false} 00:16:03.365 }, 00:16:03.365 "method": "bdev_nvme_attach_controller" 00:16:03.365 } 00:16:03.365 EOF 00:16:03.365 )") 00:16:03.365 18:46:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:03.365 18:46:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:03.365 18:46:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:03.365 18:46:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:03.365 "params": { 00:16:03.365 "name": "Nvme0", 00:16:03.365 "trtype": "tcp", 00:16:03.365 "traddr": "10.0.0.2", 00:16:03.365 "adrfam": "ipv4", 00:16:03.365 "trsvcid": "4420", 00:16:03.365 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:03.365 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:03.365 "hdgst": false, 00:16:03.365 "ddgst": false 00:16:03.365 }, 00:16:03.365 "method": "bdev_nvme_attach_controller" 00:16:03.365 }' 00:16:03.365 [2024-07-20 18:46:13.562190] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:03.365 [2024-07-20 18:46:13.562278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1360818 ] 00:16:03.365 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.366 [2024-07-20 18:46:13.622852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.623 [2024-07-20 18:46:13.710353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.880 Running I/O for 1 seconds... 
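By this point the first bdevperf instance appears to have shut itself down after the I/O errors (note the spdk_app_stop'd on non-zero warning above), so the kill -9 finds nothing and the || true swallows the 'No such process' error. The recovery pass is a shorter rerun of the same workload now that the host NQN has been added back, roughly:

kill -9 "$perfpid" 2>/dev/null || true     # perfpid=1360547 from the first pass
rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 \
      /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1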
00:16:05.254 00:16:05.254 Latency(us) 00:16:05.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.254 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:05.254 Verification LBA range: start 0x0 length 0x400 00:16:05.254 Nvme0n1 : 1.10 870.65 54.42 0.00 0.00 70014.95 19612.25 61749.48 00:16:05.254 =================================================================================================================== 00:16:05.254 Total : 870.65 54.42 0.00 0.00 70014.95 19612.25 61749.48 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:05.254 rmmod nvme_tcp 00:16:05.254 rmmod nvme_fabrics 00:16:05.254 rmmod nvme_keyring 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1360500 ']' 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1360500 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 1360500 ']' 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 1360500 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1360500 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1360500' 00:16:05.254 killing process with pid 1360500 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 1360500 00:16:05.254 18:46:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 1360500 00:16:05.512 [2024-07-20 18:46:15.698817] app.c: 
711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:05.512 18:46:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:05.512 18:46:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:05.512 18:46:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:05.512 18:46:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:05.512 18:46:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:05.512 18:46:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.513 18:46:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.513 18:46:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.474 18:46:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:07.474 18:46:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:07.474 00:16:07.474 real 0m8.728s 00:16:07.474 user 0m20.114s 00:16:07.474 sys 0m2.677s 00:16:07.474 18:46:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:07.474 18:46:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:07.474 ************************************ 00:16:07.474 END TEST nvmf_host_management 00:16:07.474 ************************************ 00:16:07.752 18:46:17 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:07.752 18:46:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:07.752 18:46:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:07.752 18:46:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:07.752 ************************************ 00:16:07.752 START TEST nvmf_lvol 00:16:07.752 ************************************ 00:16:07.752 18:46:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:07.752 * Looking for test storage... 
00:16:07.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:07.752 18:46:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:07.752 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:07.752 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.752 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.752 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.752 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.752 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.752 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.752 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.753 18:46:17 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:07.753 18:46:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.650 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:09.651 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:09.651 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:09.651 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:09.651 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:09.651 
18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:09.651 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:09.909 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:09.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:09.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:16:09.909 00:16:09.909 --- 10.0.0.2 ping statistics --- 00:16:09.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.909 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:16:09.909 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:09.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:09.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:16:09.909 00:16:09.909 --- 10.0.0.1 ping statistics --- 00:16:09.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.909 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:16:09.909 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:09.909 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:09.909 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:09.909 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:09.909 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:09.909 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:09.909 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:09.909 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:09.909 18:46:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:09.909 18:46:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:09.909 18:46:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:09.909 18:46:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:09.909 18:46:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:09.909 18:46:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1363024 00:16:09.909 18:46:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:09.909 18:46:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1363024 00:16:09.909 18:46:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 1363024 ']' 00:16:09.909 18:46:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.909 18:46:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:09.909 18:46:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.909 18:46:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:09.909 18:46:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:09.909 [2024-07-20 18:46:20.058335] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:09.909 [2024-07-20 18:46:20.058420] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.909 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.909 [2024-07-20 18:46:20.128608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:09.909 [2024-07-20 18:46:20.222151] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.909 [2024-07-20 18:46:20.222204] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:09.909 [2024-07-20 18:46:20.222225] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:09.909 [2024-07-20 18:46:20.222238] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:09.909 [2024-07-20 18:46:20.222247] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:09.909 [2024-07-20 18:46:20.222339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.909 [2024-07-20 18:46:20.222407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:09.909 [2024-07-20 18:46:20.222410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.197 18:46:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:10.197 18:46:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:16:10.197 18:46:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:10.197 18:46:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:10.197 18:46:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:10.197 18:46:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:10.197 18:46:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:10.455 [2024-07-20 18:46:20.574190] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:10.455 18:46:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:10.712 18:46:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:10.712 18:46:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:10.969 18:46:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:10.969 18:46:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:11.225 18:46:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:11.483 18:46:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9f136c41-1145-4365-b3e5-7a4f6631c976 00:16:11.483 18:46:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9f136c41-1145-4365-b3e5-7a4f6631c976 lvol 20 00:16:11.741 18:46:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=9077fe1f-0c25-4668-9d57-30f465a2efe5 00:16:11.741 18:46:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:11.999 18:46:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9077fe1f-0c25-4668-9d57-30f465a2efe5 00:16:12.257 18:46:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:16:12.515 [2024-07-20 18:46:22.603114] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:12.515 18:46:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:12.773 18:46:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1363328 00:16:12.773 18:46:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:12.773 18:46:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:12.773 EAL: No free 2048 kB hugepages reported on node 1 00:16:13.704 18:46:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 9077fe1f-0c25-4668-9d57-30f465a2efe5 MY_SNAPSHOT 00:16:13.961 18:46:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9404a25d-c92e-4d63-be51-92ac20967149 00:16:13.961 18:46:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 9077fe1f-0c25-4668-9d57-30f465a2efe5 30 00:16:14.218 18:46:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9404a25d-c92e-4d63-be51-92ac20967149 MY_CLONE 00:16:14.475 18:46:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1a70994a-4e77-487f-aaf7-21c574ef9452 00:16:14.475 18:46:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1a70994a-4e77-487f-aaf7-21c574ef9452 00:16:15.039 18:46:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1363328 00:16:23.137 Initializing NVMe Controllers 00:16:23.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:23.137 Controller IO queue size 128, less than required. 00:16:23.137 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:23.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:23.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:23.137 Initialization complete. Launching workers. 
00:16:23.137 ======================================================== 00:16:23.137 Latency(us) 00:16:23.137 Device Information : IOPS MiB/s Average min max 00:16:23.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11142.40 43.52 11491.89 2141.74 71331.55 00:16:23.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11108.80 43.39 11529.61 2085.24 75196.59 00:16:23.137 ======================================================== 00:16:23.137 Total : 22251.20 86.92 11510.72 2085.24 75196.59 00:16:23.137 00:16:23.137 18:46:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:23.394 18:46:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9077fe1f-0c25-4668-9d57-30f465a2efe5 00:16:23.652 18:46:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9f136c41-1145-4365-b3e5-7a4f6631c976 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:23.909 rmmod nvme_tcp 00:16:23.909 rmmod nvme_fabrics 00:16:23.909 rmmod nvme_keyring 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1363024 ']' 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1363024 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 1363024 ']' 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 1363024 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1363024 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1363024' 00:16:23.909 killing process with pid 1363024 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 1363024 00:16:23.909 18:46:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 1363024 00:16:24.167 18:46:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:24.167 
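The nvmf_lvol run above provisions its namespace from a logical volume and then exercises the snapshot path: two 64 MB malloc bdevs are striped into a raid0, an lvstore and an lvol are created on top, the lvol is snapshotted, resized to 30 (LVOL_BDEV_FINAL_SIZE), cloned from the snapshot, and the clone is inflated before spdk_nvme_perf runs against the exported namespace. Condensed into the underlying rpc.py calls as a sketch; capturing RPC output into shell variables is illustrative of how the script threads names and UUIDs between steps:

# Sketch of the lvol lifecycle driven by nvmf_lvol.sh, as plain rpc.py calls.
# Sizes and names mirror the log above.
RPC=./scripts/rpc.py

$RPC bdev_malloc_create 64 512                      # Malloc0
$RPC bdev_malloc_create 64 512                      # Malloc1
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc0 Malloc1"

LVS=$($RPC bdev_lvol_create_lvstore raid0 lvs)      # returns the lvstore UUID
LVOL=$($RPC bdev_lvol_create -u "$LVS" lvol 20)     # size 20, per LVOL_BDEV_INIT_SIZE

SNAP=$($RPC bdev_lvol_snapshot "$LVOL" MY_SNAPSHOT) # point-in-time, read-only snapshot
$RPC bdev_lvol_resize "$LVOL" 30                    # grow the live volume to LVOL_BDEV_FINAL_SIZE
CLONE=$($RPC bdev_lvol_clone "$SNAP" MY_CLONE)      # thin clone backed by the snapshot
$RPC bdev_lvol_inflate "$CLONE"                     # decouple the clone from its parent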
18:46:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:24.167 18:46:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:24.167 18:46:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:24.167 18:46:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:24.167 18:46:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.167 18:46:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.167 18:46:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:26.698 00:16:26.698 real 0m18.588s 00:16:26.698 user 1m2.988s 00:16:26.698 sys 0m5.777s 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:26.698 ************************************ 00:16:26.698 END TEST nvmf_lvol 00:16:26.698 ************************************ 00:16:26.698 18:46:36 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:26.698 18:46:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:26.698 18:46:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:26.698 18:46:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:26.698 ************************************ 00:16:26.698 START TEST nvmf_lvs_grow 00:16:26.698 ************************************ 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:26.698 * Looking for test storage... 
00:16:26.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.698 18:46:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:26.699 18:46:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:28.141 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:28.141 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.141 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:28.142 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:28.142 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:28.142 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:28.142 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.142 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:28.142 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:28.142 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.142 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:28.142 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.142 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:28.142 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:28.142 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:28.142 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:16:28.142 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.400 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:28.400 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:28.400 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.400 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:28.400 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:28.400 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:28.400 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:28.400 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:28.400 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:28.400 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:28.400 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:28.400 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:28.400 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:28.400 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:28.400 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:28.400 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:28.400 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:28.400 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:28.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:28.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:16:28.401 00:16:28.401 --- 10.0.0.2 ping statistics --- 00:16:28.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.401 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:28.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:28.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:16:28.401 00:16:28.401 --- 10.0.0.1 ping statistics --- 00:16:28.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.401 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1366586 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1366586 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 1366586 ']' 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:28.401 18:46:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:28.401 [2024-07-20 18:46:38.671705] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:28.401 [2024-07-20 18:46:38.671805] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.401 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.659 [2024-07-20 18:46:38.736840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.659 [2024-07-20 18:46:38.820906] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.659 [2024-07-20 18:46:38.820958] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:28.659 [2024-07-20 18:46:38.820978] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:28.659 [2024-07-20 18:46:38.820989] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:28.659 [2024-07-20 18:46:38.820999] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:28.659 [2024-07-20 18:46:38.821025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.659 18:46:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:28.659 18:46:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:16:28.659 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:28.659 18:46:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:28.659 18:46:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:28.659 18:46:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.659 18:46:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:28.918 [2024-07-20 18:46:39.182956] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:28.918 18:46:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:28.918 18:46:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:28.918 18:46:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:28.918 18:46:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:28.918 ************************************ 00:16:28.918 START TEST lvs_grow_clean 00:16:28.918 ************************************ 00:16:28.918 18:46:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:16:28.918 18:46:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:28.918 18:46:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:28.918 18:46:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:28.918 18:46:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:28.918 18:46:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:28.918 18:46:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:28.918 18:46:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:28.918 18:46:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:28.918 18:46:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:29.175 18:46:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:16:29.175 18:46:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:29.434 18:46:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=683da9c1-ebd2-43ca-9897-cc6753bc069a 00:16:29.434 18:46:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 683da9c1-ebd2-43ca-9897-cc6753bc069a 00:16:29.434 18:46:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:29.692 18:46:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:29.692 18:46:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:29.692 18:46:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 683da9c1-ebd2-43ca-9897-cc6753bc069a lvol 150 00:16:29.951 18:46:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5ec2618c-fddf-4703-8cf7-3b9b2b76bb72 00:16:29.951 18:46:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:29.951 18:46:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:30.209 [2024-07-20 18:46:40.481020] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:30.209 [2024-07-20 18:46:40.481112] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:30.209 true 00:16:30.209 18:46:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 683da9c1-ebd2-43ca-9897-cc6753bc069a 00:16:30.209 18:46:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:30.468 18:46:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:30.468 18:46:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:30.726 18:46:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5ec2618c-fddf-4703-8cf7-3b9b2b76bb72 00:16:30.985 18:46:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:31.242 [2024-07-20 18:46:41.468141] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.242 18:46:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:31.501 18:46:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1367020 00:16:31.501 18:46:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:31.501 18:46:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1367020 /var/tmp/bdevperf.sock 00:16:31.501 18:46:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 1367020 ']' 00:16:31.501 18:46:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:31.501 18:46:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:31.501 18:46:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:31.501 18:46:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:31.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:31.501 18:46:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:31.501 18:46:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:31.501 [2024-07-20 18:46:41.769397] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:16:31.501 [2024-07-20 18:46:41.769486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1367020 ] 00:16:31.501 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.759 [2024-07-20 18:46:41.829395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.759 [2024-07-20 18:46:41.914480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.759 18:46:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:31.759 18:46:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:16:31.759 18:46:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:32.325 Nvme0n1 00:16:32.325 18:46:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:32.583 [ 00:16:32.583 { 00:16:32.583 "name": "Nvme0n1", 00:16:32.583 "aliases": [ 00:16:32.583 "5ec2618c-fddf-4703-8cf7-3b9b2b76bb72" 00:16:32.583 ], 00:16:32.583 "product_name": "NVMe disk", 00:16:32.583 "block_size": 4096, 00:16:32.583 "num_blocks": 38912, 00:16:32.583 "uuid": "5ec2618c-fddf-4703-8cf7-3b9b2b76bb72", 00:16:32.583 "assigned_rate_limits": { 00:16:32.583 "rw_ios_per_sec": 0, 00:16:32.583 "rw_mbytes_per_sec": 0, 00:16:32.583 "r_mbytes_per_sec": 0, 00:16:32.583 "w_mbytes_per_sec": 0 00:16:32.583 }, 00:16:32.583 "claimed": false, 00:16:32.583 "zoned": false, 00:16:32.583 "supported_io_types": { 00:16:32.583 "read": true, 00:16:32.583 "write": true, 00:16:32.583 "unmap": true, 00:16:32.583 "write_zeroes": true, 00:16:32.583 "flush": true, 00:16:32.583 "reset": true, 00:16:32.583 "compare": true, 00:16:32.583 "compare_and_write": true, 00:16:32.583 "abort": true, 00:16:32.583 "nvme_admin": true, 00:16:32.583 "nvme_io": true 00:16:32.583 }, 00:16:32.583 "memory_domains": [ 00:16:32.583 { 00:16:32.583 "dma_device_id": "system", 00:16:32.583 "dma_device_type": 1 00:16:32.583 } 00:16:32.583 ], 00:16:32.583 "driver_specific": { 00:16:32.583 "nvme": [ 00:16:32.583 { 00:16:32.583 "trid": { 00:16:32.583 "trtype": "TCP", 00:16:32.583 "adrfam": "IPv4", 00:16:32.583 "traddr": "10.0.0.2", 00:16:32.583 "trsvcid": "4420", 00:16:32.583 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:32.583 }, 00:16:32.583 "ctrlr_data": { 00:16:32.583 "cntlid": 1, 00:16:32.583 "vendor_id": "0x8086", 00:16:32.583 "model_number": "SPDK bdev Controller", 00:16:32.583 "serial_number": "SPDK0", 00:16:32.583 "firmware_revision": "24.05.1", 00:16:32.583 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:32.583 "oacs": { 00:16:32.583 "security": 0, 00:16:32.583 "format": 0, 00:16:32.583 "firmware": 0, 00:16:32.583 "ns_manage": 0 00:16:32.583 }, 00:16:32.583 "multi_ctrlr": true, 00:16:32.583 "ana_reporting": false 00:16:32.583 }, 00:16:32.583 "vs": { 00:16:32.583 "nvme_version": "1.3" 00:16:32.583 }, 00:16:32.583 "ns_data": { 00:16:32.583 "id": 1, 00:16:32.583 "can_share": true 00:16:32.583 } 00:16:32.583 } 00:16:32.584 ], 00:16:32.584 "mp_policy": "active_passive" 00:16:32.584 } 00:16:32.584 } 00:16:32.584 ] 00:16:32.584 18:46:42 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1367152 00:16:32.584 18:46:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:32.584 18:46:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:32.841 Running I/O for 10 seconds... 00:16:33.772 Latency(us) 00:16:33.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:33.772 Nvme0n1 : 1.00 14371.00 56.14 0.00 0.00 0.00 0.00 0.00 00:16:33.772 =================================================================================================================== 00:16:33.772 Total : 14371.00 56.14 0.00 0.00 0.00 0.00 0.00 00:16:33.772 00:16:34.704 18:46:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 683da9c1-ebd2-43ca-9897-cc6753bc069a 00:16:34.705 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:34.705 Nvme0n1 : 2.00 14434.50 56.38 0.00 0.00 0.00 0.00 0.00 00:16:34.705 =================================================================================================================== 00:16:34.705 Total : 14434.50 56.38 0.00 0.00 0.00 0.00 0.00 00:16:34.705 00:16:34.962 true 00:16:34.962 18:46:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 683da9c1-ebd2-43ca-9897-cc6753bc069a 00:16:34.962 18:46:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:35.220 18:46:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:35.220 18:46:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:35.220 18:46:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1367152 00:16:35.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:35.784 Nvme0n1 : 3.00 14576.67 56.94 0.00 0.00 0.00 0.00 0.00 00:16:35.784 =================================================================================================================== 00:16:35.784 Total : 14576.67 56.94 0.00 0.00 0.00 0.00 0.00 00:16:35.784 00:16:36.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:36.716 Nvme0n1 : 4.00 14724.50 57.52 0.00 0.00 0.00 0.00 0.00 00:16:36.716 =================================================================================================================== 00:16:36.716 Total : 14724.50 57.52 0.00 0.00 0.00 0.00 0.00 00:16:36.716 00:16:37.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:37.652 Nvme0n1 : 5.00 14723.60 57.51 0.00 0.00 0.00 0.00 0.00 00:16:37.652 =================================================================================================================== 00:16:37.652 Total : 14723.60 57.51 0.00 0.00 0.00 0.00 0.00 00:16:37.652 00:16:39.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:39.022 Nvme0n1 : 6.00 14723.00 57.51 0.00 0.00 0.00 0.00 0.00 00:16:39.022 
=================================================================================================================== 00:16:39.022 Total : 14723.00 57.51 0.00 0.00 0.00 0.00 0.00 00:16:39.022 00:16:39.956 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:39.956 Nvme0n1 : 7.00 14784.71 57.75 0.00 0.00 0.00 0.00 0.00 00:16:39.956 =================================================================================================================== 00:16:39.956 Total : 14784.71 57.75 0.00 0.00 0.00 0.00 0.00 00:16:39.956 00:16:40.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:40.890 Nvme0n1 : 8.00 14802.25 57.82 0.00 0.00 0.00 0.00 0.00 00:16:40.890 =================================================================================================================== 00:16:40.890 Total : 14802.25 57.82 0.00 0.00 0.00 0.00 0.00 00:16:40.890 00:16:41.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:41.859 Nvme0n1 : 9.00 14795.00 57.79 0.00 0.00 0.00 0.00 0.00 00:16:41.859 =================================================================================================================== 00:16:41.859 Total : 14795.00 57.79 0.00 0.00 0.00 0.00 0.00 00:16:41.859 00:16:42.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:42.812 Nvme0n1 : 10.00 14811.40 57.86 0.00 0.00 0.00 0.00 0.00 00:16:42.812 =================================================================================================================== 00:16:42.812 Total : 14811.40 57.86 0.00 0.00 0.00 0.00 0.00 00:16:42.812 00:16:42.812 00:16:42.812 Latency(us) 00:16:42.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:42.812 Nvme0n1 : 10.00 14810.99 57.86 0.00 0.00 8635.99 5534.15 22136.60 00:16:42.812 =================================================================================================================== 00:16:42.812 Total : 14810.99 57.86 0.00 0.00 8635.99 5534.15 22136.60 00:16:42.812 0 00:16:42.812 18:46:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1367020 00:16:42.812 18:46:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 1367020 ']' 00:16:42.812 18:46:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 1367020 00:16:42.812 18:46:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:16:42.812 18:46:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:42.812 18:46:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1367020 00:16:42.812 18:46:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:42.812 18:46:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:42.812 18:46:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1367020' 00:16:42.812 killing process with pid 1367020 00:16:42.812 18:46:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 1367020 00:16:42.812 Received shutdown signal, test time was about 10.000000 seconds 00:16:42.812 00:16:42.812 Latency(us) 00:16:42.812 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:16:42.812 =================================================================================================================== 00:16:42.812 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:42.812 18:46:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 1367020 00:16:43.069 18:46:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:43.326 18:46:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:43.583 18:46:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 683da9c1-ebd2-43ca-9897-cc6753bc069a 00:16:43.583 18:46:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:43.840 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:43.840 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:16:43.840 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:44.097 [2024-07-20 18:46:54.400763] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:44.354 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 683da9c1-ebd2-43ca-9897-cc6753bc069a 00:16:44.354 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:16:44.354 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 683da9c1-ebd2-43ca-9897-cc6753bc069a 00:16:44.354 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.354 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:44.354 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.354 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:44.354 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.354 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:44.354 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.354 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:44.354 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 683da9c1-ebd2-43ca-9897-cc6753bc069a 00:16:44.354 request: 00:16:44.354 { 00:16:44.354 "uuid": "683da9c1-ebd2-43ca-9897-cc6753bc069a", 00:16:44.354 "method": "bdev_lvol_get_lvstores", 00:16:44.354 "req_id": 1 00:16:44.354 } 00:16:44.354 Got JSON-RPC error response 00:16:44.354 response: 00:16:44.354 { 00:16:44.354 "code": -19, 00:16:44.354 "message": "No such device" 00:16:44.354 } 00:16:44.611 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:16:44.611 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:44.611 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:44.611 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:44.611 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:44.611 aio_bdev 00:16:44.611 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5ec2618c-fddf-4703-8cf7-3b9b2b76bb72 00:16:44.611 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=5ec2618c-fddf-4703-8cf7-3b9b2b76bb72 00:16:44.611 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:44.611 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:16:44.611 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:44.611 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:44.611 18:46:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:45.175 18:46:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5ec2618c-fddf-4703-8cf7-3b9b2b76bb72 -t 2000 00:16:45.175 [ 00:16:45.175 { 00:16:45.175 "name": "5ec2618c-fddf-4703-8cf7-3b9b2b76bb72", 00:16:45.175 "aliases": [ 00:16:45.175 "lvs/lvol" 00:16:45.175 ], 00:16:45.175 "product_name": "Logical Volume", 00:16:45.175 "block_size": 4096, 00:16:45.175 "num_blocks": 38912, 00:16:45.175 "uuid": "5ec2618c-fddf-4703-8cf7-3b9b2b76bb72", 00:16:45.175 "assigned_rate_limits": { 00:16:45.175 "rw_ios_per_sec": 0, 00:16:45.175 "rw_mbytes_per_sec": 0, 00:16:45.175 "r_mbytes_per_sec": 0, 00:16:45.175 "w_mbytes_per_sec": 0 00:16:45.175 }, 00:16:45.175 "claimed": false, 00:16:45.175 "zoned": false, 00:16:45.175 "supported_io_types": { 00:16:45.175 "read": true, 00:16:45.175 "write": true, 00:16:45.175 "unmap": true, 00:16:45.175 "write_zeroes": true, 00:16:45.175 "flush": false, 00:16:45.175 "reset": true, 00:16:45.175 "compare": false, 00:16:45.175 "compare_and_write": false, 00:16:45.175 "abort": false, 00:16:45.175 "nvme_admin": false, 00:16:45.175 "nvme_io": false 00:16:45.175 }, 00:16:45.175 "driver_specific": { 00:16:45.175 "lvol": { 00:16:45.175 "lvol_store_uuid": "683da9c1-ebd2-43ca-9897-cc6753bc069a", 00:16:45.175 "base_bdev": "aio_bdev", 
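For reference, the lvol-store grow path that lvs_grow_clean exercises above reduces to a short RPC sequence. A minimal sketch, assuming an already-running nvmf_tgt on the default /var/tmp/spdk.sock RPC socket, scripts/rpc.py from an SPDK checkout, and a scratch file at /tmp/aio_bdev_file (the file path is illustrative; the sizes, bdev name, and jq filter mirror this run):

  # back the lvstore with a file-based AIO bdev, then carve out a 150M lvol
  truncate -s 200M /tmp/aio_bdev_file
  scripts/rpc.py bdev_aio_create /tmp/aio_bdev_file aio_bdev 4096
  lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)
  scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150

  # grow the backing file, let the AIO bdev re-read its size, then grow the lvstore into the new space
  truncate -s 400M /tmp/aio_bdev_file
  scripts/rpc.py bdev_aio_rescan aio_bdev
  scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"

  # with 4M clusters the usable cluster count roughly doubles (49 -> 99 in this run)
  scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'

The calls around this point in the log then re-create the AIO bdev on the same file and wait for the lvol to be examined, confirming that the grown metadata survives a close/open cycle.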
00:16:45.175 "thin_provision": false, 00:16:45.175 "num_allocated_clusters": 38, 00:16:45.175 "snapshot": false, 00:16:45.175 "clone": false, 00:16:45.175 "esnap_clone": false 00:16:45.175 } 00:16:45.175 } 00:16:45.175 } 00:16:45.175 ] 00:16:45.175 18:46:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:16:45.175 18:46:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 683da9c1-ebd2-43ca-9897-cc6753bc069a 00:16:45.175 18:46:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:45.431 18:46:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:45.432 18:46:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 683da9c1-ebd2-43ca-9897-cc6753bc069a 00:16:45.432 18:46:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:45.689 18:46:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:45.689 18:46:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5ec2618c-fddf-4703-8cf7-3b9b2b76bb72 00:16:45.946 18:46:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 683da9c1-ebd2-43ca-9897-cc6753bc069a 00:16:46.203 18:46:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:46.460 18:46:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:46.460 00:16:46.460 real 0m17.536s 00:16:46.460 user 0m16.969s 00:16:46.460 sys 0m1.882s 00:16:46.460 18:46:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:46.460 18:46:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:46.460 ************************************ 00:16:46.460 END TEST lvs_grow_clean 00:16:46.460 ************************************ 00:16:46.717 18:46:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:46.717 18:46:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:46.717 18:46:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:46.717 18:46:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:46.717 ************************************ 00:16:46.717 START TEST lvs_grow_dirty 00:16:46.717 ************************************ 00:16:46.717 18:46:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:16:46.717 18:46:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:46.717 18:46:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:46.717 18:46:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:16:46.717 18:46:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:46.717 18:46:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:46.717 18:46:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:46.717 18:46:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:46.717 18:46:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:46.717 18:46:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:46.975 18:46:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:46.975 18:46:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:47.232 18:46:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=51c0cabc-f213-45a1-b034-5a05c31f904d 00:16:47.232 18:46:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51c0cabc-f213-45a1-b034-5a05c31f904d 00:16:47.232 18:46:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:47.490 18:46:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:47.490 18:46:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:47.490 18:46:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 51c0cabc-f213-45a1-b034-5a05c31f904d lvol 150 00:16:47.748 18:46:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6e85ceab-21b2-44a6-bd7b-56cd7504b428 00:16:47.748 18:46:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:47.748 18:46:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:48.007 [2024-07-20 18:46:58.236373] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:48.007 [2024-07-20 18:46:58.236463] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:48.007 true 00:16:48.007 18:46:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51c0cabc-f213-45a1-b034-5a05c31f904d 00:16:48.007 18:46:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:16:48.264 18:46:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:48.264 18:46:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:48.522 18:46:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6e85ceab-21b2-44a6-bd7b-56cd7504b428 00:16:48.780 18:46:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:49.039 [2024-07-20 18:46:59.299546] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.039 18:46:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:49.298 18:46:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1369189 00:16:49.298 18:46:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:49.298 18:46:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:49.298 18:46:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1369189 /var/tmp/bdevperf.sock 00:16:49.298 18:46:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 1369189 ']' 00:16:49.299 18:46:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:49.299 18:46:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:49.299 18:46:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:49.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:49.299 18:46:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:49.299 18:46:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:49.557 [2024-07-20 18:46:59.639166] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:16:49.557 [2024-07-20 18:46:59.639237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1369189 ] 00:16:49.557 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.557 [2024-07-20 18:46:59.701220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.557 [2024-07-20 18:46:59.791284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.816 18:46:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:49.816 18:46:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:16:49.816 18:46:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:50.075 Nvme0n1 00:16:50.075 18:47:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:50.332 [ 00:16:50.332 { 00:16:50.332 "name": "Nvme0n1", 00:16:50.333 "aliases": [ 00:16:50.333 "6e85ceab-21b2-44a6-bd7b-56cd7504b428" 00:16:50.333 ], 00:16:50.333 "product_name": "NVMe disk", 00:16:50.333 "block_size": 4096, 00:16:50.333 "num_blocks": 38912, 00:16:50.333 "uuid": "6e85ceab-21b2-44a6-bd7b-56cd7504b428", 00:16:50.333 "assigned_rate_limits": { 00:16:50.333 "rw_ios_per_sec": 0, 00:16:50.333 "rw_mbytes_per_sec": 0, 00:16:50.333 "r_mbytes_per_sec": 0, 00:16:50.333 "w_mbytes_per_sec": 0 00:16:50.333 }, 00:16:50.333 "claimed": false, 00:16:50.333 "zoned": false, 00:16:50.333 "supported_io_types": { 00:16:50.333 "read": true, 00:16:50.333 "write": true, 00:16:50.333 "unmap": true, 00:16:50.333 "write_zeroes": true, 00:16:50.333 "flush": true, 00:16:50.333 "reset": true, 00:16:50.333 "compare": true, 00:16:50.333 "compare_and_write": true, 00:16:50.333 "abort": true, 00:16:50.333 "nvme_admin": true, 00:16:50.333 "nvme_io": true 00:16:50.333 }, 00:16:50.333 "memory_domains": [ 00:16:50.333 { 00:16:50.333 "dma_device_id": "system", 00:16:50.333 "dma_device_type": 1 00:16:50.333 } 00:16:50.333 ], 00:16:50.333 "driver_specific": { 00:16:50.333 "nvme": [ 00:16:50.333 { 00:16:50.333 "trid": { 00:16:50.333 "trtype": "TCP", 00:16:50.333 "adrfam": "IPv4", 00:16:50.333 "traddr": "10.0.0.2", 00:16:50.333 "trsvcid": "4420", 00:16:50.333 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:50.333 }, 00:16:50.333 "ctrlr_data": { 00:16:50.333 "cntlid": 1, 00:16:50.333 "vendor_id": "0x8086", 00:16:50.333 "model_number": "SPDK bdev Controller", 00:16:50.333 "serial_number": "SPDK0", 00:16:50.333 "firmware_revision": "24.05.1", 00:16:50.333 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:50.333 "oacs": { 00:16:50.333 "security": 0, 00:16:50.333 "format": 0, 00:16:50.333 "firmware": 0, 00:16:50.333 "ns_manage": 0 00:16:50.333 }, 00:16:50.333 "multi_ctrlr": true, 00:16:50.333 "ana_reporting": false 00:16:50.333 }, 00:16:50.333 "vs": { 00:16:50.333 "nvme_version": "1.3" 00:16:50.333 }, 00:16:50.333 "ns_data": { 00:16:50.333 "id": 1, 00:16:50.333 "can_share": true 00:16:50.333 } 00:16:50.333 } 00:16:50.333 ], 00:16:50.333 "mp_policy": "active_passive" 00:16:50.333 } 00:16:50.333 } 00:16:50.333 ] 00:16:50.333 18:47:00 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1369247 00:16:50.333 18:47:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:50.333 18:47:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:50.591 Running I/O for 10 seconds... 00:16:51.525 Latency(us) 00:16:51.525 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.525 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:51.525 Nvme0n1 : 1.00 14681.00 57.35 0.00 0.00 0.00 0.00 0.00 00:16:51.525 =================================================================================================================== 00:16:51.525 Total : 14681.00 57.35 0.00 0.00 0.00 0.00 0.00 00:16:51.525 00:16:52.464 18:47:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 51c0cabc-f213-45a1-b034-5a05c31f904d 00:16:52.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:52.464 Nvme0n1 : 2.00 14700.50 57.42 0.00 0.00 0.00 0.00 0.00 00:16:52.464 =================================================================================================================== 00:16:52.464 Total : 14700.50 57.42 0.00 0.00 0.00 0.00 0.00 00:16:52.464 00:16:52.722 true 00:16:52.722 18:47:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51c0cabc-f213-45a1-b034-5a05c31f904d 00:16:52.722 18:47:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:52.979 18:47:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:52.980 18:47:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:52.980 18:47:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1369247 00:16:53.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:53.546 Nvme0n1 : 3.00 14770.67 57.70 0.00 0.00 0.00 0.00 0.00 00:16:53.546 =================================================================================================================== 00:16:53.546 Total : 14770.67 57.70 0.00 0.00 0.00 0.00 0.00 00:16:53.546 00:16:54.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:54.479 Nvme0n1 : 4.00 14758.00 57.65 0.00 0.00 0.00 0.00 0.00 00:16:54.479 =================================================================================================================== 00:16:54.479 Total : 14758.00 57.65 0.00 0.00 0.00 0.00 0.00 00:16:54.479 00:16:55.449 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:55.449 Nvme0n1 : 5.00 14748.00 57.61 0.00 0.00 0.00 0.00 0.00 00:16:55.449 =================================================================================================================== 00:16:55.449 Total : 14748.00 57.61 0.00 0.00 0.00 0.00 0.00 00:16:55.449 00:16:56.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:56.381 Nvme0n1 : 6.00 14756.17 57.64 0.00 0.00 0.00 0.00 0.00 00:16:56.381 
=================================================================================================================== 00:16:56.381 Total : 14756.17 57.64 0.00 0.00 0.00 0.00 0.00 00:16:56.381 00:16:57.757 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:57.757 Nvme0n1 : 7.00 14858.71 58.04 0.00 0.00 0.00 0.00 0.00 00:16:57.757 =================================================================================================================== 00:16:57.757 Total : 14858.71 58.04 0.00 0.00 0.00 0.00 0.00 00:16:57.757 00:16:58.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:58.690 Nvme0n1 : 8.00 14867.00 58.07 0.00 0.00 0.00 0.00 0.00 00:16:58.690 =================================================================================================================== 00:16:58.690 Total : 14867.00 58.07 0.00 0.00 0.00 0.00 0.00 00:16:58.690 00:16:59.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:59.622 Nvme0n1 : 9.00 14950.22 58.40 0.00 0.00 0.00 0.00 0.00 00:16:59.622 =================================================================================================================== 00:16:59.622 Total : 14950.22 58.40 0.00 0.00 0.00 0.00 0.00 00:16:59.622 00:17:00.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.556 Nvme0n1 : 10.00 14965.60 58.46 0.00 0.00 0.00 0.00 0.00 00:17:00.556 =================================================================================================================== 00:17:00.556 Total : 14965.60 58.46 0.00 0.00 0.00 0.00 0.00 00:17:00.556 00:17:00.556 00:17:00.556 Latency(us) 00:17:00.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.556 Nvme0n1 : 10.01 14969.46 58.47 0.00 0.00 8544.88 5631.24 17767.54 00:17:00.556 =================================================================================================================== 00:17:00.556 Total : 14969.46 58.47 0.00 0.00 8544.88 5631.24 17767.54 00:17:00.556 0 00:17:00.556 18:47:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1369189 00:17:00.556 18:47:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 1369189 ']' 00:17:00.556 18:47:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 1369189 00:17:00.556 18:47:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:17:00.556 18:47:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:00.556 18:47:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1369189 00:17:00.556 18:47:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:00.556 18:47:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:00.556 18:47:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1369189' 00:17:00.556 killing process with pid 1369189 00:17:00.556 18:47:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 1369189 00:17:00.556 Received shutdown signal, test time was about 10.000000 seconds 00:17:00.556 00:17:00.556 Latency(us) 00:17:00.557 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:00.557 =================================================================================================================== 00:17:00.557 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:00.557 18:47:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 1369189 00:17:00.814 18:47:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:01.072 18:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:01.330 18:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51c0cabc-f213-45a1-b034-5a05c31f904d 00:17:01.330 18:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:01.588 18:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:01.588 18:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:01.588 18:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1366586 00:17:01.588 18:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1366586 00:17:01.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1366586 Killed "${NVMF_APP[@]}" "$@" 00:17:01.588 18:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:01.588 18:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:01.588 18:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:01.588 18:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:01.588 18:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:01.589 18:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1370534 00:17:01.589 18:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:01.589 18:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1370534 00:17:01.589 18:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 1370534 ']' 00:17:01.589 18:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.589 18:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:01.589 18:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
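The dirty variant differs mainly in how the lvstore comes back: instead of deleting the AIO bdev cleanly, the nvmf_tgt that holds the lvstore is killed with SIGKILL, and the freshly started target has to recover the blobstore when the same backing file is re-attached. Roughly, under the same assumptions as the earlier sketch ($nvmfpid and $lvol_uuid stand in for the values printed in this log):

  kill -9 "$nvmfpid"                                 # leave the lvstore open/dirty on purpose
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  # (the harness waits for /var/tmp/spdk.sock before issuing further RPCs)
  scripts/rpc.py bdev_aio_create /tmp/aio_bdev_file aio_bdev 4096   # target log reports "Performing recovery on blobstore"
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py bdev_get_bdevs -b "$lvol_uuid" -t 2000             # the lvol reappears with its clusters intact

The free/total cluster checks that follow (61 free, 99 total) confirm that the recovered lvstore still reflects the earlier grow.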
00:17:01.589 18:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:01.589 18:47:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:01.589 [2024-07-20 18:47:11.849111] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:01.589 [2024-07-20 18:47:11.849211] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.589 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.846 [2024-07-20 18:47:11.918536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.846 [2024-07-20 18:47:12.005017] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.846 [2024-07-20 18:47:12.005074] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.846 [2024-07-20 18:47:12.005101] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.846 [2024-07-20 18:47:12.005112] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.846 [2024-07-20 18:47:12.005122] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:01.846 [2024-07-20 18:47:12.005147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.846 18:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:01.846 18:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:01.846 18:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:01.846 18:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:01.846 18:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:01.846 18:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:01.846 18:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:02.105 [2024-07-20 18:47:12.362510] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:02.105 [2024-07-20 18:47:12.362657] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:02.105 [2024-07-20 18:47:12.362716] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:02.105 18:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:02.105 18:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6e85ceab-21b2-44a6-bd7b-56cd7504b428 00:17:02.105 18:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=6e85ceab-21b2-44a6-bd7b-56cd7504b428 00:17:02.105 18:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:02.105 18:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:02.105 18:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:02.105 18:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:02.105 18:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:02.364 18:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6e85ceab-21b2-44a6-bd7b-56cd7504b428 -t 2000 00:17:02.622 [ 00:17:02.622 { 00:17:02.622 "name": "6e85ceab-21b2-44a6-bd7b-56cd7504b428", 00:17:02.622 "aliases": [ 00:17:02.622 "lvs/lvol" 00:17:02.622 ], 00:17:02.622 "product_name": "Logical Volume", 00:17:02.622 "block_size": 4096, 00:17:02.622 "num_blocks": 38912, 00:17:02.622 "uuid": "6e85ceab-21b2-44a6-bd7b-56cd7504b428", 00:17:02.622 "assigned_rate_limits": { 00:17:02.622 "rw_ios_per_sec": 0, 00:17:02.622 "rw_mbytes_per_sec": 0, 00:17:02.622 "r_mbytes_per_sec": 0, 00:17:02.622 "w_mbytes_per_sec": 0 00:17:02.622 }, 00:17:02.622 "claimed": false, 00:17:02.622 "zoned": false, 00:17:02.622 "supported_io_types": { 00:17:02.622 "read": true, 00:17:02.622 "write": true, 00:17:02.622 "unmap": true, 00:17:02.622 "write_zeroes": true, 00:17:02.622 "flush": false, 00:17:02.622 "reset": true, 00:17:02.622 "compare": false, 00:17:02.622 "compare_and_write": false, 00:17:02.622 "abort": false, 00:17:02.622 "nvme_admin": false, 00:17:02.622 "nvme_io": false 00:17:02.622 }, 00:17:02.622 "driver_specific": { 00:17:02.622 "lvol": { 00:17:02.622 "lvol_store_uuid": "51c0cabc-f213-45a1-b034-5a05c31f904d", 00:17:02.622 "base_bdev": "aio_bdev", 00:17:02.622 "thin_provision": false, 00:17:02.622 "num_allocated_clusters": 38, 00:17:02.622 "snapshot": false, 00:17:02.622 "clone": false, 00:17:02.622 "esnap_clone": false 00:17:02.622 } 00:17:02.622 } 00:17:02.622 } 00:17:02.622 ] 00:17:02.622 18:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:02.622 18:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51c0cabc-f213-45a1-b034-5a05c31f904d 00:17:02.622 18:47:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:02.879 18:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:02.879 18:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51c0cabc-f213-45a1-b034-5a05c31f904d 00:17:02.879 18:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:03.136 18:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:03.136 18:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:03.392 [2024-07-20 18:47:13.667572] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:03.649 18:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
51c0cabc-f213-45a1-b034-5a05c31f904d 00:17:03.649 18:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:03.649 18:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51c0cabc-f213-45a1-b034-5a05c31f904d 00:17:03.649 18:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:03.649 18:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:03.649 18:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:03.649 18:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:03.649 18:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:03.650 18:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:03.650 18:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:03.650 18:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:03.650 18:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51c0cabc-f213-45a1-b034-5a05c31f904d 00:17:03.650 request: 00:17:03.650 { 00:17:03.650 "uuid": "51c0cabc-f213-45a1-b034-5a05c31f904d", 00:17:03.650 "method": "bdev_lvol_get_lvstores", 00:17:03.650 "req_id": 1 00:17:03.650 } 00:17:03.650 Got JSON-RPC error response 00:17:03.650 response: 00:17:03.650 { 00:17:03.650 "code": -19, 00:17:03.650 "message": "No such device" 00:17:03.650 } 00:17:03.650 18:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:03.650 18:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:03.650 18:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:03.650 18:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:03.650 18:47:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:04.234 aio_bdev 00:17:04.234 18:47:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6e85ceab-21b2-44a6-bd7b-56cd7504b428 00:17:04.234 18:47:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=6e85ceab-21b2-44a6-bd7b-56cd7504b428 00:17:04.234 18:47:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:04.234 18:47:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:04.234 18:47:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
00:17:04.234 18:47:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:04.234 18:47:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:04.234 18:47:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6e85ceab-21b2-44a6-bd7b-56cd7504b428 -t 2000 00:17:04.798 [ 00:17:04.798 { 00:17:04.798 "name": "6e85ceab-21b2-44a6-bd7b-56cd7504b428", 00:17:04.798 "aliases": [ 00:17:04.798 "lvs/lvol" 00:17:04.798 ], 00:17:04.798 "product_name": "Logical Volume", 00:17:04.798 "block_size": 4096, 00:17:04.798 "num_blocks": 38912, 00:17:04.798 "uuid": "6e85ceab-21b2-44a6-bd7b-56cd7504b428", 00:17:04.798 "assigned_rate_limits": { 00:17:04.798 "rw_ios_per_sec": 0, 00:17:04.798 "rw_mbytes_per_sec": 0, 00:17:04.798 "r_mbytes_per_sec": 0, 00:17:04.798 "w_mbytes_per_sec": 0 00:17:04.798 }, 00:17:04.798 "claimed": false, 00:17:04.798 "zoned": false, 00:17:04.798 "supported_io_types": { 00:17:04.798 "read": true, 00:17:04.798 "write": true, 00:17:04.798 "unmap": true, 00:17:04.798 "write_zeroes": true, 00:17:04.798 "flush": false, 00:17:04.798 "reset": true, 00:17:04.798 "compare": false, 00:17:04.798 "compare_and_write": false, 00:17:04.798 "abort": false, 00:17:04.798 "nvme_admin": false, 00:17:04.798 "nvme_io": false 00:17:04.798 }, 00:17:04.798 "driver_specific": { 00:17:04.798 "lvol": { 00:17:04.798 "lvol_store_uuid": "51c0cabc-f213-45a1-b034-5a05c31f904d", 00:17:04.798 "base_bdev": "aio_bdev", 00:17:04.798 "thin_provision": false, 00:17:04.798 "num_allocated_clusters": 38, 00:17:04.798 "snapshot": false, 00:17:04.798 "clone": false, 00:17:04.798 "esnap_clone": false 00:17:04.798 } 00:17:04.798 } 00:17:04.798 } 00:17:04.798 ] 00:17:04.798 18:47:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:04.798 18:47:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51c0cabc-f213-45a1-b034-5a05c31f904d 00:17:04.798 18:47:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:04.799 18:47:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:04.799 18:47:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51c0cabc-f213-45a1-b034-5a05c31f904d 00:17:04.799 18:47:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:05.056 18:47:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:05.056 18:47:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6e85ceab-21b2-44a6-bd7b-56cd7504b428 00:17:05.314 18:47:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 51c0cabc-f213-45a1-b034-5a05c31f904d 00:17:05.879 18:47:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:05.879 18:47:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:05.879 00:17:05.879 real 0m19.369s 00:17:05.879 user 0m48.682s 00:17:05.879 sys 0m4.753s 00:17:05.879 18:47:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:05.879 18:47:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:05.879 ************************************ 00:17:05.879 END TEST lvs_grow_dirty 00:17:05.879 ************************************ 00:17:06.137 18:47:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:06.137 18:47:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:17:06.137 18:47:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:17:06.137 18:47:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:06.137 18:47:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:06.137 18:47:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:06.137 18:47:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:06.137 18:47:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:06.137 18:47:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:06.137 nvmf_trace.0 00:17:06.137 18:47:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:17:06.137 18:47:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:06.137 18:47:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:06.137 18:47:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:06.137 18:47:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:06.137 18:47:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:06.137 18:47:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:06.137 18:47:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:06.137 rmmod nvme_tcp 00:17:06.137 rmmod nvme_fabrics 00:17:06.137 rmmod nvme_keyring 00:17:06.137 18:47:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:06.137 18:47:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:06.137 18:47:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:06.137 18:47:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1370534 ']' 00:17:06.137 18:47:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1370534 00:17:06.137 18:47:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 1370534 ']' 00:17:06.138 18:47:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 1370534 00:17:06.138 18:47:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:17:06.138 18:47:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:06.138 18:47:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1370534 00:17:06.138 18:47:16 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:06.138 18:47:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:06.138 18:47:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1370534' 00:17:06.138 killing process with pid 1370534 00:17:06.138 18:47:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 1370534 00:17:06.138 18:47:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 1370534 00:17:06.395 18:47:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:06.395 18:47:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:06.395 18:47:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:06.395 18:47:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:06.395 18:47:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:06.395 18:47:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.395 18:47:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.395 18:47:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.296 18:47:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:08.296 00:17:08.296 real 0m42.156s 00:17:08.296 user 1m11.397s 00:17:08.296 sys 0m8.529s 00:17:08.296 18:47:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:08.296 18:47:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:08.296 ************************************ 00:17:08.296 END TEST nvmf_lvs_grow 00:17:08.296 ************************************ 00:17:08.556 18:47:18 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:08.556 18:47:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:08.556 18:47:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:08.556 18:47:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:08.556 ************************************ 00:17:08.556 START TEST nvmf_bdev_io_wait 00:17:08.556 ************************************ 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:08.556 * Looking for test storage... 
00:17:08.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:08.556 18:47:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:10.548 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:10.548 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:10.548 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.548 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:10.548 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:10.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:10.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:17:10.549 00:17:10.549 --- 10.0.0.2 ping statistics --- 00:17:10.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.549 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:10.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:10.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:17:10.549 00:17:10.549 --- 10.0.0.1 ping statistics --- 00:17:10.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.549 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1373053 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1373053 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 1373053 ']' 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:10.549 18:47:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:10.807 [2024-07-20 18:47:20.904253] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:17:10.807 [2024-07-20 18:47:20.904338] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.807 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.807 [2024-07-20 18:47:20.971027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:10.807 [2024-07-20 18:47:21.063080] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.807 [2024-07-20 18:47:21.063145] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.807 [2024-07-20 18:47:21.063159] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.807 [2024-07-20 18:47:21.063170] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:10.807 [2024-07-20 18:47:21.063179] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:10.807 [2024-07-20 18:47:21.063232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.807 [2024-07-20 18:47:21.063290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.807 [2024-07-20 18:47:21.063357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:10.807 [2024-07-20 18:47:21.063359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.807 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:10.807 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:17:10.807 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:10.807 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:10.807 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:11.066 [2024-07-20 18:47:21.214686] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.066 18:47:21 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:11.066 Malloc0 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:11.066 [2024-07-20 18:47:21.277521] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1373190 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1373191 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1373194 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:11.066 { 00:17:11.066 "params": { 00:17:11.066 "name": "Nvme$subsystem", 00:17:11.066 "trtype": "$TEST_TRANSPORT", 00:17:11.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:11.066 "adrfam": "ipv4", 00:17:11.066 "trsvcid": "$NVMF_PORT", 00:17:11.066 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:17:11.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:11.066 "hdgst": ${hdgst:-false}, 00:17:11.066 "ddgst": ${ddgst:-false} 00:17:11.066 }, 00:17:11.066 "method": "bdev_nvme_attach_controller" 00:17:11.066 } 00:17:11.066 EOF 00:17:11.066 )") 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:11.066 { 00:17:11.066 "params": { 00:17:11.066 "name": "Nvme$subsystem", 00:17:11.066 "trtype": "$TEST_TRANSPORT", 00:17:11.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:11.066 "adrfam": "ipv4", 00:17:11.066 "trsvcid": "$NVMF_PORT", 00:17:11.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:11.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:11.066 "hdgst": ${hdgst:-false}, 00:17:11.066 "ddgst": ${ddgst:-false} 00:17:11.066 }, 00:17:11.066 "method": "bdev_nvme_attach_controller" 00:17:11.066 } 00:17:11.066 EOF 00:17:11.066 )") 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1373196 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:11.066 { 00:17:11.066 "params": { 00:17:11.066 "name": "Nvme$subsystem", 00:17:11.066 "trtype": "$TEST_TRANSPORT", 00:17:11.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:11.066 "adrfam": "ipv4", 00:17:11.066 "trsvcid": "$NVMF_PORT", 00:17:11.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:11.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:11.066 "hdgst": ${hdgst:-false}, 00:17:11.066 "ddgst": ${ddgst:-false} 00:17:11.066 }, 00:17:11.066 "method": "bdev_nvme_attach_controller" 00:17:11.066 } 00:17:11.066 EOF 00:17:11.066 )") 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:11.066 { 00:17:11.066 "params": { 00:17:11.066 
"name": "Nvme$subsystem", 00:17:11.066 "trtype": "$TEST_TRANSPORT", 00:17:11.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:11.066 "adrfam": "ipv4", 00:17:11.066 "trsvcid": "$NVMF_PORT", 00:17:11.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:11.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:11.066 "hdgst": ${hdgst:-false}, 00:17:11.066 "ddgst": ${ddgst:-false} 00:17:11.066 }, 00:17:11.066 "method": "bdev_nvme_attach_controller" 00:17:11.066 } 00:17:11.066 EOF 00:17:11.066 )") 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1373190 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:11.066 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:11.066 "params": { 00:17:11.066 "name": "Nvme1", 00:17:11.066 "trtype": "tcp", 00:17:11.066 "traddr": "10.0.0.2", 00:17:11.066 "adrfam": "ipv4", 00:17:11.066 "trsvcid": "4420", 00:17:11.066 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.066 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:11.066 "hdgst": false, 00:17:11.066 "ddgst": false 00:17:11.066 }, 00:17:11.066 "method": "bdev_nvme_attach_controller" 00:17:11.066 }' 00:17:11.067 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:11.067 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:11.067 "params": { 00:17:11.067 "name": "Nvme1", 00:17:11.067 "trtype": "tcp", 00:17:11.067 "traddr": "10.0.0.2", 00:17:11.067 "adrfam": "ipv4", 00:17:11.067 "trsvcid": "4420", 00:17:11.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:11.067 "hdgst": false, 00:17:11.067 "ddgst": false 00:17:11.067 }, 00:17:11.067 "method": "bdev_nvme_attach_controller" 00:17:11.067 }' 00:17:11.067 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:11.067 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:11.067 "params": { 00:17:11.067 "name": "Nvme1", 00:17:11.067 "trtype": "tcp", 00:17:11.067 "traddr": "10.0.0.2", 00:17:11.067 "adrfam": "ipv4", 00:17:11.067 "trsvcid": "4420", 00:17:11.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:11.067 "hdgst": false, 00:17:11.067 "ddgst": false 00:17:11.067 }, 00:17:11.067 "method": "bdev_nvme_attach_controller" 00:17:11.067 }' 00:17:11.067 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:11.067 18:47:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:11.067 "params": { 00:17:11.067 "name": "Nvme1", 00:17:11.067 "trtype": "tcp", 00:17:11.067 "traddr": "10.0.0.2", 00:17:11.067 "adrfam": "ipv4", 00:17:11.067 "trsvcid": "4420", 00:17:11.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:11.067 "hdgst": false, 00:17:11.067 "ddgst": false 00:17:11.067 }, 00:17:11.067 "method": 
"bdev_nvme_attach_controller" 00:17:11.067 }' 00:17:11.067 [2024-07-20 18:47:21.324758] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:11.067 [2024-07-20 18:47:21.324852] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:11.067 [2024-07-20 18:47:21.325305] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:11.067 [2024-07-20 18:47:21.325305] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:11.067 [2024-07-20 18:47:21.325386] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-20 18:47:21.325386] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:11.067 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:11.067 [2024-07-20 18:47:21.325378] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:11.067 [2024-07-20 18:47:21.325458] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:11.067 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.325 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.326 [2024-07-20 18:47:21.500024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.326 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.326 [2024-07-20 18:47:21.574957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:11.326 [2024-07-20 18:47:21.599512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.584 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.584 [2024-07-20 18:47:21.674702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:11.584 [2024-07-20 18:47:21.697378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.584 [2024-07-20 18:47:21.768219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.584 [2024-07-20 18:47:21.771799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:11.584 [2024-07-20 18:47:21.835701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:11.843 Running I/O for 1 seconds... 00:17:11.843 Running I/O for 1 seconds... 00:17:11.843 Running I/O for 1 seconds... 00:17:11.843 Running I/O for 1 seconds... 
00:17:12.782 00:17:12.782 Latency(us) 00:17:12.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.782 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:12.782 Nvme1n1 : 1.00 193887.82 757.37 0.00 0.00 657.59 271.55 879.88 00:17:12.782 =================================================================================================================== 00:17:12.782 Total : 193887.82 757.37 0.00 0.00 657.59 271.55 879.88 00:17:12.782 00:17:12.782 Latency(us) 00:17:12.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.782 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:12.782 Nvme1n1 : 1.02 7720.31 30.16 0.00 0.00 16422.41 7233.23 26796.94 00:17:12.782 =================================================================================================================== 00:17:12.782 Total : 7720.31 30.16 0.00 0.00 16422.41 7233.23 26796.94 00:17:13.040 00:17:13.040 Latency(us) 00:17:13.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.040 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:13.040 Nvme1n1 : 1.03 4928.45 19.25 0.00 0.00 25643.31 7718.68 35923.44 00:17:13.040 =================================================================================================================== 00:17:13.040 Total : 4928.45 19.25 0.00 0.00 25643.31 7718.68 35923.44 00:17:13.040 00:17:13.040 Latency(us) 00:17:13.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.040 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:13.040 Nvme1n1 : 1.01 8022.05 31.34 0.00 0.00 15899.24 6941.96 32816.55 00:17:13.040 =================================================================================================================== 00:17:13.040 Total : 8022.05 31.34 0.00 0.00 15899.24 6941.96 32816.55 00:17:13.296 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1373191 00:17:13.296 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1373194 00:17:13.296 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1373196 00:17:13.296 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:13.296 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.296 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:13.296 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.296 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:13.296 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:13.296 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:13.296 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:13.296 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:13.296 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:13.296 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:13.296 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:13.296 rmmod nvme_tcp 00:17:13.296 rmmod nvme_fabrics 00:17:13.296 rmmod nvme_keyring 00:17:13.296 18:47:23 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:13.296 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:13.297 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:13.297 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1373053 ']' 00:17:13.297 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1373053 00:17:13.297 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 1373053 ']' 00:17:13.297 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 1373053 00:17:13.297 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:17:13.297 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:13.297 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1373053 00:17:13.297 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:13.297 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:13.297 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1373053' 00:17:13.297 killing process with pid 1373053 00:17:13.297 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 1373053 00:17:13.297 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 1373053 00:17:13.555 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:13.555 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:13.555 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:13.555 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:13.555 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:13.555 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.555 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.555 18:47:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.455 18:47:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:15.713 00:17:15.713 real 0m7.115s 00:17:15.713 user 0m15.901s 00:17:15.713 sys 0m3.419s 00:17:15.713 18:47:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:15.713 18:47:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:15.713 ************************************ 00:17:15.713 END TEST nvmf_bdev_io_wait 00:17:15.713 ************************************ 00:17:15.713 18:47:25 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:15.713 18:47:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:15.713 18:47:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:15.713 18:47:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:15.713 ************************************ 00:17:15.713 START TEST nvmf_queue_depth 00:17:15.713 ************************************ 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:15.713 * Looking for test storage... 00:17:15.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.713 18:47:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:15.714 18:47:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:17.615 
18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:17.615 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:17.615 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:17.615 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:17.615 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:17.615 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.616 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:17.616 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:17.616 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:17.616 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:17.616 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:17.616 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.616 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:17.616 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:17.616 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:17.616 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:17.616 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:17.616 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:17.616 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:17.616 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:17.616 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:17.616 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:17.616 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:17.616 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:17.616 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:17.616 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:17.616 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:17.616 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:17.876 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:17.876 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:17.876 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:17.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:17.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:17:17.876 00:17:17.876 --- 10.0.0.2 ping statistics --- 00:17:17.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.876 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:17:17.876 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:17.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:17.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:17:17.876 00:17:17.876 --- 10.0.0.1 ping statistics --- 00:17:17.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.876 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:17:17.876 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.876 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:17.876 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:17.876 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.876 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:17.876 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:17.876 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.876 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:17.876 18:47:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:17.876 18:47:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:17.876 18:47:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:17.876 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:17.876 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:17.876 18:47:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1375411 00:17:17.876 18:47:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:17.876 18:47:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1375411 00:17:17.876 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 1375411 ']' 00:17:17.876 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.876 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:17.876 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.876 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:17.876 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:17.876 [2024-07-20 18:47:28.068587] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
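For reference, the nvmf_tcp_init sequence traced above reduces to the shell sketch below; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are the ones used on this host and would differ elsewhere.

    # move the target-side port into its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator side stays in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic reach the default port
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify connectivity in both directions
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1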
00:17:17.876 [2024-07-20 18:47:28.068673] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.876 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.876 [2024-07-20 18:47:28.134031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.135 [2024-07-20 18:47:28.219000] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.135 [2024-07-20 18:47:28.219047] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.135 [2024-07-20 18:47:28.219070] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.135 [2024-07-20 18:47:28.219097] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.135 [2024-07-20 18:47:28.219106] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:18.135 [2024-07-20 18:47:28.219144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:18.135 [2024-07-20 18:47:28.349486] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:18.135 Malloc0 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.135 18:47:28 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:18.135 [2024-07-20 18:47:28.412705] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1375433 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1375433 /var/tmp/bdevperf.sock 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 1375433 ']' 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:18.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:18.135 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:18.135 [2024-07-20 18:47:28.458580] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
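Stripped of the autotest wrappers, the queue-depth run that follows provisions a Malloc-backed NVMe/TCP subsystem and drives it with bdevperf at queue depth 1024. A condensed sketch, with rpc.py standing in for the rpc_cmd helper and build paths abbreviated:

    # target side, inside the cvl_0_0_ns_spdk namespace
    nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: bdevperf with a 1024-deep queue, 4 KiB verify workload, 10 seconds
    bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests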
00:17:18.394 [2024-07-20 18:47:28.458679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1375433 ] 00:17:18.394 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.394 [2024-07-20 18:47:28.521087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.394 [2024-07-20 18:47:28.611285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.652 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:18.652 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:18.652 18:47:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:18.652 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.652 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:18.652 NVMe0n1 00:17:18.652 18:47:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.652 18:47:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:18.652 Running I/O for 10 seconds... 00:17:30.849 00:17:30.849 Latency(us) 00:17:30.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.849 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:30.849 Verification LBA range: start 0x0 length 0x4000 00:17:30.849 NVMe0n1 : 10.07 8758.13 34.21 0.00 0.00 116419.45 11116.85 76118.85 00:17:30.849 =================================================================================================================== 00:17:30.849 Total : 8758.13 34.21 0.00 0.00 116419.45 11116.85 76118.85 00:17:30.849 0 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1375433 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 1375433 ']' 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 1375433 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1375433 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1375433' 00:17:30.849 killing process with pid 1375433 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 1375433 00:17:30.849 Received shutdown signal, test time was about 10.000000 seconds 00:17:30.849 00:17:30.849 Latency(us) 00:17:30.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.849 =================================================================================================================== 00:17:30.849 Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 1375433 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:30.849 rmmod nvme_tcp 00:17:30.849 rmmod nvme_fabrics 00:17:30.849 rmmod nvme_keyring 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1375411 ']' 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1375411 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 1375411 ']' 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 1375411 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1375411 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1375411' 00:17:30.849 killing process with pid 1375411 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 1375411 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 1375411 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.849 18:47:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.850 18:47:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.415 18:47:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:31.415 00:17:31.415 real 0m15.818s 00:17:31.415 user 0m22.191s 00:17:31.415 sys 
0m3.009s 00:17:31.415 18:47:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:31.415 18:47:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:31.415 ************************************ 00:17:31.415 END TEST nvmf_queue_depth 00:17:31.415 ************************************ 00:17:31.415 18:47:41 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:31.415 18:47:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:31.416 18:47:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:31.416 18:47:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:31.416 ************************************ 00:17:31.416 START TEST nvmf_target_multipath 00:17:31.416 ************************************ 00:17:31.416 18:47:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:31.674 * Looking for test storage... 00:17:31.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.674 18:47:41 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
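Both this run and the queue-depth run above enumerate usable NICs the same way before touching the network: match supported PCI vendor/device IDs, then read the net interfaces exposed under each function in sysfs. A rough equivalent, narrowed to the 0x8086:0x159b E810 parts found on this host:

    for pci in /sys/bus/pci/devices/*; do
        # keep only supported NICs (here: Intel E810, vendor 0x8086, device 0x159b)
        [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            # only interfaces reported as up are added to net_devs
            [[ $(cat "$net/operstate") == up ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done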
00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:31.674 18:47:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:31.675 18:47:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:31.675 18:47:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:31.675 18:47:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:31.675 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:31.675 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.675 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:31.675 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:31.675 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:31.675 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.675 18:47:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:31.675 18:47:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.675 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:31.675 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:31.675 18:47:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:17:31.675 18:47:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:33.580 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:33.580 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:33.580 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:33.580 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:33.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:33.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:17:33.580 00:17:33.580 --- 10.0.0.2 ping statistics --- 00:17:33.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.580 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:17:33.580 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:33.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:33.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:17:33.838 00:17:33.838 --- 10.0.0.1 ping statistics --- 00:17:33.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.838 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:33.838 only one NIC for nvmf test 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:33.838 rmmod nvme_tcp 00:17:33.838 rmmod nvme_fabrics 00:17:33.838 rmmod nvme_keyring 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.838 18:47:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:35.736 00:17:35.736 real 0m4.345s 00:17:35.736 user 0m0.820s 00:17:35.736 sys 0m1.506s 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:35.736 18:47:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:35.736 ************************************ 00:17:35.736 END TEST nvmf_target_multipath 00:17:35.736 ************************************ 00:17:35.736 18:47:46 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:35.736 18:47:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:35.736 18:47:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:35.736 18:47:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:35.994 ************************************ 00:17:35.994 START TEST nvmf_zcopy 00:17:35.994 ************************************ 00:17:35.994 18:47:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:35.994 * Looking for test storage... 
00:17:35.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:35.994 18:47:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:35.994 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:35.994 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.994 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.994 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.994 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.994 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.994 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.994 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.994 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.994 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.994 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.994 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.994 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.994 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.994 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.994 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.994 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.994 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:35.994 18:47:46 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
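A note on the identity variables set in the fragment above: the host NQN comes from nvme gen-hostnqn and its trailing UUID is reused as the host ID, which is exactly what the NVME_HOST array and NVME_CONNECT='nvme connect' exist to carry. A minimal sketch of the same pattern outside the harness follows; the address, port, and subsystem NQN are simply the defaults this harness configures (NVMF_PORT=4420, NVME_SUBNQN), and the connect command is illustrative, not something executed in this run:

    # derive the initiator identity the way nvmf/common.sh does (sketch)
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the UUID portion
    # how NVME_CONNECT/NVME_HOST would be combined for a kernel-initiator connect
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"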
00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:35.995 18:47:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:37.899 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:37.899 
18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:37.899 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:37.899 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:37.899 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:37.899 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:38.158 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:38.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:38.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:17:38.159 00:17:38.159 --- 10.0.0.2 ping statistics --- 00:17:38.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.159 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:38.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:38.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:17:38.159 00:17:38.159 --- 10.0.0.1 ping statistics --- 00:17:38.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.159 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1380486 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1380486 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 1380486 ']' 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:38.159 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:38.159 [2024-07-20 18:47:48.382828] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:38.159 [2024-07-20 18:47:48.382905] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.159 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.159 [2024-07-20 18:47:48.451119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.417 [2024-07-20 18:47:48.538995] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.417 [2024-07-20 18:47:48.539056] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:38.417 [2024-07-20 18:47:48.539084] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.417 [2024-07-20 18:47:48.539096] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.417 [2024-07-20 18:47:48.539106] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.417 [2024-07-20 18:47:48.539144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.417 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:38.417 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:17:38.417 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:38.417 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:38.417 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:38.417 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.417 18:47:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:38.417 18:47:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:38.417 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:38.418 [2024-07-20 18:47:48.683615] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:38.418 [2024-07-20 18:47:48.699880] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:38.418 malloc0 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.418 
18:47:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:38.418 { 00:17:38.418 "params": { 00:17:38.418 "name": "Nvme$subsystem", 00:17:38.418 "trtype": "$TEST_TRANSPORT", 00:17:38.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:38.418 "adrfam": "ipv4", 00:17:38.418 "trsvcid": "$NVMF_PORT", 00:17:38.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:38.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:38.418 "hdgst": ${hdgst:-false}, 00:17:38.418 "ddgst": ${ddgst:-false} 00:17:38.418 }, 00:17:38.418 "method": "bdev_nvme_attach_controller" 00:17:38.418 } 00:17:38.418 EOF 00:17:38.418 )") 00:17:38.418 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:38.676 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:38.676 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:38.676 18:47:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:38.676 "params": { 00:17:38.676 "name": "Nvme1", 00:17:38.676 "trtype": "tcp", 00:17:38.676 "traddr": "10.0.0.2", 00:17:38.676 "adrfam": "ipv4", 00:17:38.676 "trsvcid": "4420", 00:17:38.676 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.676 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:38.676 "hdgst": false, 00:17:38.676 "ddgst": false 00:17:38.676 }, 00:17:38.676 "method": "bdev_nvme_attach_controller" 00:17:38.676 }' 00:17:38.676 [2024-07-20 18:47:48.781690] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:38.676 [2024-07-20 18:47:48.781757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1380627 ] 00:17:38.676 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.676 [2024-07-20 18:47:48.844860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.676 [2024-07-20 18:47:48.941845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.936 Running I/O for 10 seconds... 
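Everything on the target side of this verify run was assembled over RPC: rpc_cmd in the trace is the harness's wrapper around scripts/rpc.py, and the target started earlier listens on /var/tmp/spdk.sock. Issued by hand, the same sequence would look roughly like the sketch below, with every argument taken from the rpc_cmd calls above:

    # TCP transport with the flags from the trace: zero-copy enabled, in-capsule data size 0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    # subsystem cnode1: allow any host (-a), fixed serial number, up to 10 namespaces
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # data listener and discovery listener on the target address
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32 MB malloc bdev with a 4096-byte block size, exposed as namespace 1 of cnode1
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1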
00:17:48.937 00:17:48.937 Latency(us) 00:17:48.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.937 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:17:48.937 Verification LBA range: start 0x0 length 0x1000 00:17:48.937 Nvme1n1 : 10.02 5105.73 39.89 0.00 0.00 25002.20 4029.25 43884.85 00:17:48.937 =================================================================================================================== 00:17:48.937 Total : 5105.73 39.89 0.00 0.00 25002.20 4029.25 43884.85 00:17:49.197 18:47:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1381821 00:17:49.197 18:47:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:17:49.197 18:47:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:49.197 18:47:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:17:49.197 18:47:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:17:49.197 18:47:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:49.197 18:47:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:49.197 18:47:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:49.197 18:47:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:49.197 { 00:17:49.197 "params": { 00:17:49.197 "name": "Nvme$subsystem", 00:17:49.197 "trtype": "$TEST_TRANSPORT", 00:17:49.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:49.197 "adrfam": "ipv4", 00:17:49.197 "trsvcid": "$NVMF_PORT", 00:17:49.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:49.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:49.197 "hdgst": ${hdgst:-false}, 00:17:49.197 "ddgst": ${ddgst:-false} 00:17:49.197 }, 00:17:49.197 "method": "bdev_nvme_attach_controller" 00:17:49.197 } 00:17:49.197 EOF 00:17:49.197 )") 00:17:49.197 18:47:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:49.197 [2024-07-20 18:47:59.430690] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.197 [2024-07-20 18:47:59.430735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.197 18:47:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
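The --json /dev/fd/62 and /dev/fd/63 arguments passed to bdevperf are process-substitution descriptors: gen_nvmf_target_json emits a small JSON config containing the bdev_nvme_attach_controller entry printf'd in the trace, and bdevperf reads it as its bdev configuration. Reproduced by hand it would look roughly like the sketch below; the method and params come from the trace, while the outer subsystems/bdev/config wrapper is SPDK's standard JSON-config layout and is assumed here rather than visible in the log (the temp-file path is purely illustrative):

    # hand bdevperf a minimal NVMe-oF attach config (sketch)
    cat > /tmp/bdevperf_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192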
00:17:49.197 18:47:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:49.197 18:47:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:49.197 "params": { 00:17:49.197 "name": "Nvme1", 00:17:49.197 "trtype": "tcp", 00:17:49.197 "traddr": "10.0.0.2", 00:17:49.197 "adrfam": "ipv4", 00:17:49.197 "trsvcid": "4420", 00:17:49.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:49.197 "hdgst": false, 00:17:49.197 "ddgst": false 00:17:49.197 }, 00:17:49.197 "method": "bdev_nvme_attach_controller" 00:17:49.197 }' 00:17:49.197 [2024-07-20 18:47:59.438648] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.197 [2024-07-20 18:47:59.438676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.197 [2024-07-20 18:47:59.446661] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.197 [2024-07-20 18:47:59.446686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.197 [2024-07-20 18:47:59.454677] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.197 [2024-07-20 18:47:59.454699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.197 [2024-07-20 18:47:59.462695] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.197 [2024-07-20 18:47:59.462716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.197 [2024-07-20 18:47:59.469116] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:49.197 [2024-07-20 18:47:59.469176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1381821 ] 00:17:49.197 [2024-07-20 18:47:59.470713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.197 [2024-07-20 18:47:59.470733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.197 [2024-07-20 18:47:59.478734] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.197 [2024-07-20 18:47:59.478754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.197 [2024-07-20 18:47:59.486755] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.197 [2024-07-20 18:47:59.486790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.197 [2024-07-20 18:47:59.494802] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.197 [2024-07-20 18:47:59.494822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.197 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.197 [2024-07-20 18:47:59.502824] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.197 [2024-07-20 18:47:59.502860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.197 [2024-07-20 18:47:59.510864] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.197 [2024-07-20 18:47:59.510887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.197 [2024-07-20 18:47:59.518862] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.197 [2024-07-20 18:47:59.518885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.526898] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.526919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.532012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.457 [2024-07-20 18:47:59.534918] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.534940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.542967] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.543001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.550953] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.550974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.558973] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.558995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.566995] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.567016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.575016] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.575037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.583059] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.583105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.591099] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.591135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.599099] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.599124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.607127] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.607152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.615152] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.615179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.623168] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.623192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.623333] reactor.c: 937:reactor_run: *NOTICE*: Reactor 
started on core 0 00:17:49.457 [2024-07-20 18:47:59.631187] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.631212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.639234] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.639278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.647260] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.647296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.655280] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.655318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.663307] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.663344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.671326] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.671364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.679345] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.679383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.687366] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.687404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.695375] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.695401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.703428] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.703463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.711445] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.711482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.719454] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.719485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.727462] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.727487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.735485] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.735510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.743517] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:17:49.457 [2024-07-20 18:47:59.743548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.751537] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.751567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.759559] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.759588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.767578] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.767607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.457 [2024-07-20 18:47:59.775595] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.457 [2024-07-20 18:47:59.775620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.715 [2024-07-20 18:47:59.783632] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.715 [2024-07-20 18:47:59.783657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.715 [2024-07-20 18:47:59.791649] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.715 [2024-07-20 18:47:59.791689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.715 [2024-07-20 18:47:59.800020] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.715 [2024-07-20 18:47:59.800047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.715 [2024-07-20 18:47:59.807697] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.715 [2024-07-20 18:47:59.807724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.715 Running I/O for 5 seconds... 
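The paired messages that dominate this stretch of the log, 'Requested NSID 1 already in use' from spdk_nvmf_subsystem_add_ns_ext followed by 'Unable to add namespace' from nvmf_rpc_ns_paused, appear to come from the test repeatedly re-issuing nvmf_subsystem_add_ns for namespace 1 while the timed randrw job is in flight: each attempt pauses the subsystem, is rejected because NSID 1 is already attached, and is reported as a failed RPC without disturbing the running I/O. The failure itself is easy to reproduce with the values from this run (a sketch):

    # the second add of the same NSID is rejected exactly as logged above
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add succeeds
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # fails: Requested NSID 1 already in use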
00:17:49.715 [2024-07-20 18:47:59.815717] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.715 [2024-07-20 18:47:59.815743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.715 [2024-07-20 18:47:59.832290] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.715 [2024-07-20 18:47:59.832321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.715 [2024-07-20 18:47:59.844572] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.715 [2024-07-20 18:47:59.844604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.715 [2024-07-20 18:47:59.856586] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.715 [2024-07-20 18:47:59.856618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.715 [2024-07-20 18:47:59.868267] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.715 [2024-07-20 18:47:59.868295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.715 [2024-07-20 18:47:59.879277] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.715 [2024-07-20 18:47:59.879304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.715 [2024-07-20 18:47:59.890475] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.715 [2024-07-20 18:47:59.890502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.715 [2024-07-20 18:47:59.903595] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.715 [2024-07-20 18:47:59.903624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.715 [2024-07-20 18:47:59.913959] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.715 [2024-07-20 18:47:59.913987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.715 [2024-07-20 18:47:59.924921] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.715 [2024-07-20 18:47:59.924947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.715 [2024-07-20 18:47:59.939246] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.715 [2024-07-20 18:47:59.939274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.715 [2024-07-20 18:47:59.950981] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.715 [2024-07-20 18:47:59.951024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.715 [2024-07-20 18:47:59.962583] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.715 [2024-07-20 18:47:59.962611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.715 [2024-07-20 18:47:59.975613] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.715 [2024-07-20 18:47:59.975644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.715 [2024-07-20 18:47:59.986530] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.715 
[2024-07-20 18:47:59.986556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.715 [2024-07-20 18:47:59.997116] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.715 [2024-07-20 18:47:59.997143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.715 [2024-07-20 18:48:00.007122] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.715 [2024-07-20 18:48:00.007163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.715 [2024-07-20 18:48:00.021509] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.715 [2024-07-20 18:48:00.021554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.715 [2024-07-20 18:48:00.031727] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.715 [2024-07-20 18:48:00.031756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.973 [2024-07-20 18:48:00.046852] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.973 [2024-07-20 18:48:00.046881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.973 [2024-07-20 18:48:00.056970] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.973 [2024-07-20 18:48:00.056998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.973 [2024-07-20 18:48:00.069995] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.973 [2024-07-20 18:48:00.070023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.973 [2024-07-20 18:48:00.082234] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.973 [2024-07-20 18:48:00.082261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.973 [2024-07-20 18:48:00.093419] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.973 [2024-07-20 18:48:00.093450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.973 [2024-07-20 18:48:00.103602] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.973 [2024-07-20 18:48:00.103630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.973 [2024-07-20 18:48:00.115546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.973 [2024-07-20 18:48:00.115575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.973 [2024-07-20 18:48:00.125398] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.973 [2024-07-20 18:48:00.125427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.973 [2024-07-20 18:48:00.137445] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.973 [2024-07-20 18:48:00.137474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.973 [2024-07-20 18:48:00.147716] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.973 [2024-07-20 18:48:00.147744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.973 [2024-07-20 18:48:00.162604] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.973 [2024-07-20 18:48:00.162634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.973 [2024-07-20 18:48:00.175437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.973 [2024-07-20 18:48:00.175465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.973 [2024-07-20 18:48:00.185762] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.973 [2024-07-20 18:48:00.185812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.973 [2024-07-20 18:48:00.197679] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.973 [2024-07-20 18:48:00.197711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.973 [2024-07-20 18:48:00.208601] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.973 [2024-07-20 18:48:00.208628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.973 [2024-07-20 18:48:00.218479] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.973 [2024-07-20 18:48:00.218507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.973 [2024-07-20 18:48:00.232013] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.973 [2024-07-20 18:48:00.232041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.973 [2024-07-20 18:48:00.246305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.973 [2024-07-20 18:48:00.246348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.973 [2024-07-20 18:48:00.258557] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.973 [2024-07-20 18:48:00.258584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.973 [2024-07-20 18:48:00.270728] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.973 [2024-07-20 18:48:00.270756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.973 [2024-07-20 18:48:00.281386] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.973 [2024-07-20 18:48:00.281414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.973 [2024-07-20 18:48:00.296142] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.973 [2024-07-20 18:48:00.296171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.231 [2024-07-20 18:48:00.307103] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.231 [2024-07-20 18:48:00.307131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.231 [2024-07-20 18:48:00.316747] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.231 [2024-07-20 18:48:00.316776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.231 [2024-07-20 18:48:00.328213] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.231 [2024-07-20 18:48:00.328242] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.231 [2024-07-20 18:48:00.340435] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.231 [2024-07-20 18:48:00.340463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.231 [2024-07-20 18:48:00.354160] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.231 [2024-07-20 18:48:00.354189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.231 [2024-07-20 18:48:00.366196] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.231 [2024-07-20 18:48:00.366225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.231 [2024-07-20 18:48:00.377535] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.231 [2024-07-20 18:48:00.377564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.231 [2024-07-20 18:48:00.391255] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.231 [2024-07-20 18:48:00.391281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.231 [2024-07-20 18:48:00.402762] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.231 [2024-07-20 18:48:00.402815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.231 [2024-07-20 18:48:00.413759] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.231 [2024-07-20 18:48:00.413811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.231 [2024-07-20 18:48:00.424376] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.231 [2024-07-20 18:48:00.424403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.231 [2024-07-20 18:48:00.438309] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.231 [2024-07-20 18:48:00.438337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.231 [2024-07-20 18:48:00.450085] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.231 [2024-07-20 18:48:00.450111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.231 [2024-07-20 18:48:00.461710] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.231 [2024-07-20 18:48:00.461738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.231 [2024-07-20 18:48:00.472227] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.231 [2024-07-20 18:48:00.472255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.231 [2024-07-20 18:48:00.482681] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.231 [2024-07-20 18:48:00.482709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.231 [2024-07-20 18:48:00.492261] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.231 [2024-07-20 18:48:00.492289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.231 [2024-07-20 18:48:00.502775] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.231 [2024-07-20 18:48:00.502816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.231 [2024-07-20 18:48:00.513084] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.231 [2024-07-20 18:48:00.513113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.231 [2024-07-20 18:48:00.528423] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.231 [2024-07-20 18:48:00.528452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.231 [2024-07-20 18:48:00.539476] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.231 [2024-07-20 18:48:00.539505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.231 [2024-07-20 18:48:00.550802] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.231 [2024-07-20 18:48:00.550838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.489 [2024-07-20 18:48:00.562681] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.489 [2024-07-20 18:48:00.562708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.489 [2024-07-20 18:48:00.573122] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.489 [2024-07-20 18:48:00.573150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.489 [2024-07-20 18:48:00.588229] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.489 [2024-07-20 18:48:00.588258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.489 [2024-07-20 18:48:00.599487] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.489 [2024-07-20 18:48:00.599516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.489 [2024-07-20 18:48:00.614487] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.489 [2024-07-20 18:48:00.614516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.489 [2024-07-20 18:48:00.627759] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.489 [2024-07-20 18:48:00.627809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.489 [2024-07-20 18:48:00.640215] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.489 [2024-07-20 18:48:00.640244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.489 [2024-07-20 18:48:00.650500] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.489 [2024-07-20 18:48:00.650544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.489 [2024-07-20 18:48:00.664239] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.489 [2024-07-20 18:48:00.664267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.489 [2024-07-20 18:48:00.674766] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.489 [2024-07-20 18:48:00.674822] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.489 [2024-07-20 18:48:00.687668] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.489 [2024-07-20 18:48:00.687697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.489 [2024-07-20 18:48:00.702116] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.489 [2024-07-20 18:48:00.702161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.489 [2024-07-20 18:48:00.713034] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.489 [2024-07-20 18:48:00.713062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.489 [2024-07-20 18:48:00.723412] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.489 [2024-07-20 18:48:00.723438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.489 [2024-07-20 18:48:00.737354] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.489 [2024-07-20 18:48:00.737384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.489 [2024-07-20 18:48:00.751191] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.489 [2024-07-20 18:48:00.751220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.489 [2024-07-20 18:48:00.765887] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.489 [2024-07-20 18:48:00.765916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.489 [2024-07-20 18:48:00.777721] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.489 [2024-07-20 18:48:00.777749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.489 [2024-07-20 18:48:00.789597] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.489 [2024-07-20 18:48:00.789624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.489 [2024-07-20 18:48:00.799732] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.489 [2024-07-20 18:48:00.799761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.747 [2024-07-20 18:48:00.813942] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.747 [2024-07-20 18:48:00.813971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.747 [2024-07-20 18:48:00.826501] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.747 [2024-07-20 18:48:00.826529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.747 [2024-07-20 18:48:00.840783] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.747 [2024-07-20 18:48:00.840822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.747 [2024-07-20 18:48:00.854773] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.747 [2024-07-20 18:48:00.854818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.747 [2024-07-20 18:48:00.866101] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.747 [2024-07-20 18:48:00.866129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.747 [2024-07-20 18:48:00.879458] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.747 [2024-07-20 18:48:00.879487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.747 [2024-07-20 18:48:00.889697] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.747 [2024-07-20 18:48:00.889723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.747 [2024-07-20 18:48:00.904050] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.747 [2024-07-20 18:48:00.904079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.747 [2024-07-20 18:48:00.918433] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.747 [2024-07-20 18:48:00.918462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.747 [2024-07-20 18:48:00.930358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.747 [2024-07-20 18:48:00.930386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.747 [2024-07-20 18:48:00.944333] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.747 [2024-07-20 18:48:00.944362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.747 [2024-07-20 18:48:00.954321] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.747 [2024-07-20 18:48:00.954349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.747 [2024-07-20 18:48:00.967321] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.747 [2024-07-20 18:48:00.967350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.747 [2024-07-20 18:48:00.982216] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.747 [2024-07-20 18:48:00.982251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.747 [2024-07-20 18:48:00.996032] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.747 [2024-07-20 18:48:00.996060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.747 [2024-07-20 18:48:01.006993] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.747 [2024-07-20 18:48:01.007021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.747 [2024-07-20 18:48:01.016693] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.747 [2024-07-20 18:48:01.016720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.747 [2024-07-20 18:48:01.030134] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.747 [2024-07-20 18:48:01.030165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.747 [2024-07-20 18:48:01.042641] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.747 [2024-07-20 18:48:01.042671] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.747 [2024-07-20 18:48:01.053647] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.747 [2024-07-20 18:48:01.053673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.747 [2024-07-20 18:48:01.066552] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.748 [2024-07-20 18:48:01.066580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.006 [2024-07-20 18:48:01.082498] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.006 [2024-07-20 18:48:01.082526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.006 [2024-07-20 18:48:01.093243] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.006 [2024-07-20 18:48:01.093271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.006 [2024-07-20 18:48:01.104524] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.006 [2024-07-20 18:48:01.104551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.006 [2024-07-20 18:48:01.117606] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.006 [2024-07-20 18:48:01.117634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.006 [2024-07-20 18:48:01.128773] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.006 [2024-07-20 18:48:01.128820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.006 [2024-07-20 18:48:01.138373] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.006 [2024-07-20 18:48:01.138401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.006 [2024-07-20 18:48:01.150647] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.006 [2024-07-20 18:48:01.150695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.006 [2024-07-20 18:48:01.164875] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.006 [2024-07-20 18:48:01.164902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.006 [2024-07-20 18:48:01.176091] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.006 [2024-07-20 18:48:01.176120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.006 [2024-07-20 18:48:01.186857] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.006 [2024-07-20 18:48:01.186884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.006 [2024-07-20 18:48:01.201649] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.006 [2024-07-20 18:48:01.201678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.006 [2024-07-20 18:48:01.214946] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.006 [2024-07-20 18:48:01.214974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.006 [2024-07-20 18:48:01.226434] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.006 [2024-07-20 18:48:01.226462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.006 [2024-07-20 18:48:01.239544] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.006 [2024-07-20 18:48:01.239572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.006 [2024-07-20 18:48:01.251260] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.006 [2024-07-20 18:48:01.251288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.006 [2024-07-20 18:48:01.264580] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.006 [2024-07-20 18:48:01.264608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.006 [2024-07-20 18:48:01.277660] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.006 [2024-07-20 18:48:01.277692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.006 [2024-07-20 18:48:01.290111] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.006 [2024-07-20 18:48:01.290147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.006 [2024-07-20 18:48:01.304008] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.006 [2024-07-20 18:48:01.304036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.006 [2024-07-20 18:48:01.315962] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.006 [2024-07-20 18:48:01.315989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.006 [2024-07-20 18:48:01.326527] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.006 [2024-07-20 18:48:01.326553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.263 [2024-07-20 18:48:01.339331] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.263 [2024-07-20 18:48:01.339359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.263 [2024-07-20 18:48:01.350356] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.263 [2024-07-20 18:48:01.350383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.263 [2024-07-20 18:48:01.364894] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.263 [2024-07-20 18:48:01.364922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.263 [2024-07-20 18:48:01.376772] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.263 [2024-07-20 18:48:01.376809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.263 [2024-07-20 18:48:01.390175] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.263 [2024-07-20 18:48:01.390210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.263 [2024-07-20 18:48:01.403725] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.263 [2024-07-20 18:48:01.403753] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.263 [2024-07-20 18:48:01.418324] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.264 [2024-07-20 18:48:01.418352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.264 [2024-07-20 18:48:01.430988] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.264 [2024-07-20 18:48:01.431015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.264 [2024-07-20 18:48:01.445588] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.264 [2024-07-20 18:48:01.445616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.264 [2024-07-20 18:48:01.456812] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.264 [2024-07-20 18:48:01.456851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.264 [2024-07-20 18:48:01.468317] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.264 [2024-07-20 18:48:01.468345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.264 [2024-07-20 18:48:01.479751] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.264 [2024-07-20 18:48:01.479779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.264 [2024-07-20 18:48:01.491277] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.264 [2024-07-20 18:48:01.491304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.264 [2024-07-20 18:48:01.502157] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.264 [2024-07-20 18:48:01.502183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.264 [2024-07-20 18:48:01.515019] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.264 [2024-07-20 18:48:01.515047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.264 [2024-07-20 18:48:01.525719] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.264 [2024-07-20 18:48:01.525747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.264 [2024-07-20 18:48:01.539033] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.264 [2024-07-20 18:48:01.539061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.264 [2024-07-20 18:48:01.551959] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.264 [2024-07-20 18:48:01.551986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.264 [2024-07-20 18:48:01.562404] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.264 [2024-07-20 18:48:01.562430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.264 [2024-07-20 18:48:01.575551] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.264 [2024-07-20 18:48:01.575579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.264 [2024-07-20 18:48:01.585682] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.264 [2024-07-20 18:48:01.585710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.521 [2024-07-20 18:48:01.598504] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.521 [2024-07-20 18:48:01.598531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.521 [2024-07-20 18:48:01.611917] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.521 [2024-07-20 18:48:01.611945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.521 [2024-07-20 18:48:01.623610] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.521 [2024-07-20 18:48:01.623660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.521 [2024-07-20 18:48:01.635383] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.521 [2024-07-20 18:48:01.635409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.521 [2024-07-20 18:48:01.648881] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.521 [2024-07-20 18:48:01.648909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.521 [2024-07-20 18:48:01.657917] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.521 [2024-07-20 18:48:01.657944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.521 [2024-07-20 18:48:01.673811] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.521 [2024-07-20 18:48:01.673838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.521 [2024-07-20 18:48:01.684213] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.521 [2024-07-20 18:48:01.684239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.521 [2024-07-20 18:48:01.699099] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.521 [2024-07-20 18:48:01.699126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.521 [2024-07-20 18:48:01.711722] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.521 [2024-07-20 18:48:01.711750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.521 [2024-07-20 18:48:01.722957] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.521 [2024-07-20 18:48:01.722984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.521 [2024-07-20 18:48:01.733529] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.521 [2024-07-20 18:48:01.733556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.521 [2024-07-20 18:48:01.742759] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.521 [2024-07-20 18:48:01.742787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.521 [2024-07-20 18:48:01.754257] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.521 [2024-07-20 18:48:01.754285] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.521 [2024-07-20 18:48:01.768360] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.521 [2024-07-20 18:48:01.768389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.521 [2024-07-20 18:48:01.782691] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.521 [2024-07-20 18:48:01.782719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.521 [2024-07-20 18:48:01.799394] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.521 [2024-07-20 18:48:01.799421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.521 [2024-07-20 18:48:01.812158] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.521 [2024-07-20 18:48:01.812185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.521 [2024-07-20 18:48:01.824264] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.521 [2024-07-20 18:48:01.824290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.521 [2024-07-20 18:48:01.837377] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.521 [2024-07-20 18:48:01.837405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.778 [2024-07-20 18:48:01.848553] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.778 [2024-07-20 18:48:01.848581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.778 [2024-07-20 18:48:01.859104] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.778 [2024-07-20 18:48:01.859138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.778 [2024-07-20 18:48:01.870757] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.778 [2024-07-20 18:48:01.870790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.778 [2024-07-20 18:48:01.882892] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.778 [2024-07-20 18:48:01.882920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.778 [2024-07-20 18:48:01.894188] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.778 [2024-07-20 18:48:01.894216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.778 [2024-07-20 18:48:01.905012] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.778 [2024-07-20 18:48:01.905039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.778 [2024-07-20 18:48:01.915879] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.778 [2024-07-20 18:48:01.915906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.778 [2024-07-20 18:48:01.925907] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.778 [2024-07-20 18:48:01.925934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.778 [2024-07-20 18:48:01.937905] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.778 [2024-07-20 18:48:01.937932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.778 [2024-07-20 18:48:01.949826] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.778 [2024-07-20 18:48:01.949866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.778 [2024-07-20 18:48:01.962203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.778 [2024-07-20 18:48:01.962230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.778 [2024-07-20 18:48:01.972956] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.778 [2024-07-20 18:48:01.972984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.778 [2024-07-20 18:48:01.986183] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.778 [2024-07-20 18:48:01.986210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.778 [2024-07-20 18:48:01.998216] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.778 [2024-07-20 18:48:01.998243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.778 [2024-07-20 18:48:02.011925] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.778 [2024-07-20 18:48:02.011953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.778 [2024-07-20 18:48:02.025397] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.778 [2024-07-20 18:48:02.025425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.778 [2024-07-20 18:48:02.039605] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.778 [2024-07-20 18:48:02.039633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.778 [2024-07-20 18:48:02.049856] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.778 [2024-07-20 18:48:02.049883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.778 [2024-07-20 18:48:02.061609] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.778 [2024-07-20 18:48:02.061636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.778 [2024-07-20 18:48:02.071016] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.778 [2024-07-20 18:48:02.071043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.778 [2024-07-20 18:48:02.081308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.778 [2024-07-20 18:48:02.081343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.778 [2024-07-20 18:48:02.094037] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.778 [2024-07-20 18:48:02.094064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.036 [2024-07-20 18:48:02.104507] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.036 [2024-07-20 18:48:02.104535] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.036 [2024-07-20 18:48:02.115918] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.036 [2024-07-20 18:48:02.115946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.036 [2024-07-20 18:48:02.128724] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.036 [2024-07-20 18:48:02.128751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.036 [2024-07-20 18:48:02.139205] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.036 [2024-07-20 18:48:02.139232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.036 [2024-07-20 18:48:02.151632] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.036 [2024-07-20 18:48:02.151660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.036 [2024-07-20 18:48:02.164162] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.036 [2024-07-20 18:48:02.164190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.036 [2024-07-20 18:48:02.173821] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.036 [2024-07-20 18:48:02.173848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.036 [2024-07-20 18:48:02.185276] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.036 [2024-07-20 18:48:02.185303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.036 [2024-07-20 18:48:02.194664] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.036 [2024-07-20 18:48:02.194690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.036 [2024-07-20 18:48:02.206419] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.036 [2024-07-20 18:48:02.206446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.036 [2024-07-20 18:48:02.219721] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.036 [2024-07-20 18:48:02.219748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.036 [2024-07-20 18:48:02.230860] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.036 [2024-07-20 18:48:02.230888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.036 [2024-07-20 18:48:02.245268] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.036 [2024-07-20 18:48:02.245296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.036 [2024-07-20 18:48:02.255993] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.036 [2024-07-20 18:48:02.256021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.036 [2024-07-20 18:48:02.265465] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.036 [2024-07-20 18:48:02.265491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.036 [2024-07-20 18:48:02.278671] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.036 [2024-07-20 18:48:02.278698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.036 [2024-07-20 18:48:02.289607] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.036 [2024-07-20 18:48:02.289638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.036 [2024-07-20 18:48:02.301673] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.036 [2024-07-20 18:48:02.301715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.036 [2024-07-20 18:48:02.311650] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.036 [2024-07-20 18:48:02.311677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.036 [2024-07-20 18:48:02.322176] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.036 [2024-07-20 18:48:02.322202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.036 [2024-07-20 18:48:02.333441] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.036 [2024-07-20 18:48:02.333468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.036 [2024-07-20 18:48:02.347572] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.036 [2024-07-20 18:48:02.347600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.036 [2024-07-20 18:48:02.358780] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.036 [2024-07-20 18:48:02.358816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.293 [2024-07-20 18:48:02.371622] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.293 [2024-07-20 18:48:02.371650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.293 [2024-07-20 18:48:02.386770] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.293 [2024-07-20 18:48:02.386815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.293 [2024-07-20 18:48:02.398892] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.293 [2024-07-20 18:48:02.398920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.293 [2024-07-20 18:48:02.410858] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.293 [2024-07-20 18:48:02.410885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.293 [2024-07-20 18:48:02.420625] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.294 [2024-07-20 18:48:02.420653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.294 [2024-07-20 18:48:02.431943] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.294 [2024-07-20 18:48:02.431970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.294 [2024-07-20 18:48:02.443115] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.294 [2024-07-20 18:48:02.443142] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.294 [2024-07-20 18:48:02.457446] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.294 [2024-07-20 18:48:02.457473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.294 [2024-07-20 18:48:02.468069] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.294 [2024-07-20 18:48:02.468095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.294 [2024-07-20 18:48:02.483836] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.294 [2024-07-20 18:48:02.483863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.294 [2024-07-20 18:48:02.501091] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.294 [2024-07-20 18:48:02.501121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.294 [2024-07-20 18:48:02.515642] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.294 [2024-07-20 18:48:02.515670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.294 [2024-07-20 18:48:02.528924] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.294 [2024-07-20 18:48:02.528951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.294 [2024-07-20 18:48:02.540009] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.294 [2024-07-20 18:48:02.540036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.294 [2024-07-20 18:48:02.553034] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.294 [2024-07-20 18:48:02.553062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.294 [2024-07-20 18:48:02.564237] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.294 [2024-07-20 18:48:02.564264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.294 [2024-07-20 18:48:02.575760] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.294 [2024-07-20 18:48:02.575808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.294 [2024-07-20 18:48:02.588760] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.294 [2024-07-20 18:48:02.588810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.294 [2024-07-20 18:48:02.598880] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.294 [2024-07-20 18:48:02.598907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.294 [2024-07-20 18:48:02.613479] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.294 [2024-07-20 18:48:02.613507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.551 [2024-07-20 18:48:02.627526] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.551 [2024-07-20 18:48:02.627567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.551 [2024-07-20 18:48:02.642160] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.551 [2024-07-20 18:48:02.642188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.551 [2024-07-20 18:48:02.654147] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.551 [2024-07-20 18:48:02.654174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.551 [2024-07-20 18:48:02.668867] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.551 [2024-07-20 18:48:02.668894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.551 [2024-07-20 18:48:02.679037] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.551 [2024-07-20 18:48:02.679064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.551 [2024-07-20 18:48:02.694606] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.551 [2024-07-20 18:48:02.694633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.551 [2024-07-20 18:48:02.706788] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.551 [2024-07-20 18:48:02.706823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.551 [2024-07-20 18:48:02.717006] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.551 [2024-07-20 18:48:02.717047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.551 [2024-07-20 18:48:02.728435] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.551 [2024-07-20 18:48:02.728462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.551 [2024-07-20 18:48:02.740399] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.551 [2024-07-20 18:48:02.740426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.551 [2024-07-20 18:48:02.752077] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.551 [2024-07-20 18:48:02.752104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.551 [2024-07-20 18:48:02.762463] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.551 [2024-07-20 18:48:02.762490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.551 [2024-07-20 18:48:02.775237] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.551 [2024-07-20 18:48:02.775263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.551 [2024-07-20 18:48:02.787293] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.551 [2024-07-20 18:48:02.787335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.551 [2024-07-20 18:48:02.799904] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.551 [2024-07-20 18:48:02.799931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.551 [2024-07-20 18:48:02.810500] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.551 [2024-07-20 18:48:02.810527] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.551 [2024-07-20 18:48:02.823904] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.551 [2024-07-20 18:48:02.823931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.551 [2024-07-20 18:48:02.837456] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.551 [2024-07-20 18:48:02.837484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.551 [2024-07-20 18:48:02.848574] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.551 [2024-07-20 18:48:02.848601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.551 [2024-07-20 18:48:02.859455] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.551 [2024-07-20 18:48:02.859483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.551 [2024-07-20 18:48:02.870727] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.551 [2024-07-20 18:48:02.870753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.807 [2024-07-20 18:48:02.882912] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.807 [2024-07-20 18:48:02.882939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.807 [2024-07-20 18:48:02.895065] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.807 [2024-07-20 18:48:02.895093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.807 [2024-07-20 18:48:02.908090] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.807 [2024-07-20 18:48:02.908118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.808 [2024-07-20 18:48:02.919347] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.808 [2024-07-20 18:48:02.919377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.808 [2024-07-20 18:48:02.936158] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.808 [2024-07-20 18:48:02.936186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.808 [2024-07-20 18:48:02.947335] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.808 [2024-07-20 18:48:02.947361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.808 [2024-07-20 18:48:02.957582] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.808 [2024-07-20 18:48:02.957609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.808 [2024-07-20 18:48:02.966788] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.808 [2024-07-20 18:48:02.966824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.808 [2024-07-20 18:48:02.977195] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.808 [2024-07-20 18:48:02.977222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.808 [2024-07-20 18:48:02.987659] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.808 [2024-07-20 18:48:02.987695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.808 [2024-07-20 18:48:03.003103] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.808 [2024-07-20 18:48:03.003132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.808 [2024-07-20 18:48:03.013515] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.808 [2024-07-20 18:48:03.013542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.808 [2024-07-20 18:48:03.026282] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.808 [2024-07-20 18:48:03.026309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.808 [2024-07-20 18:48:03.040235] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.808 [2024-07-20 18:48:03.040262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.808 [2024-07-20 18:48:03.053776] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.808 [2024-07-20 18:48:03.053814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.808 [2024-07-20 18:48:03.064021] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.808 [2024-07-20 18:48:03.064049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.808 [2024-07-20 18:48:03.077184] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.808 [2024-07-20 18:48:03.077211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.808 [2024-07-20 18:48:03.087698] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.808 [2024-07-20 18:48:03.087726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.808 [2024-07-20 18:48:03.098677] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.808 [2024-07-20 18:48:03.098704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.808 [2024-07-20 18:48:03.112193] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.808 [2024-07-20 18:48:03.112221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.808 [2024-07-20 18:48:03.123994] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.808 [2024-07-20 18:48:03.124020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.064 [2024-07-20 18:48:03.133192] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.064 [2024-07-20 18:48:03.133219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.064 [2024-07-20 18:48:03.143331] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.064 [2024-07-20 18:48:03.143358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.064 [2024-07-20 18:48:03.157263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.064 [2024-07-20 18:48:03.157292] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.064 [2024-07-20 18:48:03.166959] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.064 [2024-07-20 18:48:03.166985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.064 [2024-07-20 18:48:03.182481] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.064 [2024-07-20 18:48:03.182514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.064 [2024-07-20 18:48:03.192784] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.064 [2024-07-20 18:48:03.192820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.064 [2024-07-20 18:48:03.204261] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.064 [2024-07-20 18:48:03.204289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.064 [2024-07-20 18:48:03.215175] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.064 [2024-07-20 18:48:03.215213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.064 [2024-07-20 18:48:03.228986] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.064 [2024-07-20 18:48:03.229013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.064 [2024-07-20 18:48:03.239470] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.064 [2024-07-20 18:48:03.239498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.064 [2024-07-20 18:48:03.249320] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.064 [2024-07-20 18:48:03.249348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.064 [2024-07-20 18:48:03.261580] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.064 [2024-07-20 18:48:03.261607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.064 [2024-07-20 18:48:03.272698] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.064 [2024-07-20 18:48:03.272726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.064 [2024-07-20 18:48:03.287487] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.064 [2024-07-20 18:48:03.287514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.064 [2024-07-20 18:48:03.297745] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.064 [2024-07-20 18:48:03.297773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.064 [2024-07-20 18:48:03.311895] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.064 [2024-07-20 18:48:03.311923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.064 [2024-07-20 18:48:03.321638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.064 [2024-07-20 18:48:03.321665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.064 [2024-07-20 18:48:03.336469] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.064 [2024-07-20 18:48:03.336497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.064 [2024-07-20 18:48:03.350976] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.064 [2024-07-20 18:48:03.351003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.064 [2024-07-20 18:48:03.362377] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.064 [2024-07-20 18:48:03.362404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.064 [2024-07-20 18:48:03.375007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.064 [2024-07-20 18:48:03.375034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.064 [2024-07-20 18:48:03.384716] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.064 [2024-07-20 18:48:03.384744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.321 [2024-07-20 18:48:03.395086] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.321 [2024-07-20 18:48:03.395113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.321 [2024-07-20 18:48:03.405061] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.321 [2024-07-20 18:48:03.405089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.321 [2024-07-20 18:48:03.418669] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.321 [2024-07-20 18:48:03.418696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.321 [2024-07-20 18:48:03.429199] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.321 [2024-07-20 18:48:03.429225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.321 [2024-07-20 18:48:03.441874] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.321 [2024-07-20 18:48:03.441911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.321 [2024-07-20 18:48:03.453938] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.321 [2024-07-20 18:48:03.453965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.321 [2024-07-20 18:48:03.463143] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.321 [2024-07-20 18:48:03.463171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.321 [2024-07-20 18:48:03.477773] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.321 [2024-07-20 18:48:03.477809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.321 [2024-07-20 18:48:03.488518] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.321 [2024-07-20 18:48:03.488545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.321 [2024-07-20 18:48:03.499245] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.321 [2024-07-20 18:48:03.499272] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.321 [2024-07-20 18:48:03.514513] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.321 [2024-07-20 18:48:03.514541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.321 [2024-07-20 18:48:03.525178] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.321 [2024-07-20 18:48:03.525205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.321 [2024-07-20 18:48:03.538007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.321 [2024-07-20 18:48:03.538034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.321 [2024-07-20 18:48:03.553916] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.321 [2024-07-20 18:48:03.553943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.321 [2024-07-20 18:48:03.564713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.321 [2024-07-20 18:48:03.564741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.321 [2024-07-20 18:48:03.574615] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.321 [2024-07-20 18:48:03.574642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.321 [2024-07-20 18:48:03.584998] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.321 [2024-07-20 18:48:03.585025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.321 [2024-07-20 18:48:03.599609] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.321 [2024-07-20 18:48:03.599636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.321 [2024-07-20 18:48:03.609143] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.321 [2024-07-20 18:48:03.609170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.321 [2024-07-20 18:48:03.621220] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.321 [2024-07-20 18:48:03.621247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.321 [2024-07-20 18:48:03.631316] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.321 [2024-07-20 18:48:03.631344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.578 [2024-07-20 18:48:03.646638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.578 [2024-07-20 18:48:03.646666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.578 [2024-07-20 18:48:03.658326] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.578 [2024-07-20 18:48:03.658351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.578 [2024-07-20 18:48:03.672497] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.578 [2024-07-20 18:48:03.672535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.578 [2024-07-20 18:48:03.685132] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.578 [2024-07-20 18:48:03.685159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.578 [2024-07-20 18:48:03.695746] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.578 [2024-07-20 18:48:03.695787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.578 [2024-07-20 18:48:03.709248] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.578 [2024-07-20 18:48:03.709275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.578 [2024-07-20 18:48:03.720468] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.578 [2024-07-20 18:48:03.720494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.578 [2024-07-20 18:48:03.733895] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.578 [2024-07-20 18:48:03.733933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.578 [2024-07-20 18:48:03.743885] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.578 [2024-07-20 18:48:03.743912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.578 [2024-07-20 18:48:03.758466] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.578 [2024-07-20 18:48:03.758493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.578 [2024-07-20 18:48:03.771629] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.578 [2024-07-20 18:48:03.771657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.578 [2024-07-20 18:48:03.781916] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.578 [2024-07-20 18:48:03.781943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.578 [2024-07-20 18:48:03.792280] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.578 [2024-07-20 18:48:03.792307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.578 [2024-07-20 18:48:03.808419] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.578 [2024-07-20 18:48:03.808446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.578 [2024-07-20 18:48:03.818113] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.578 [2024-07-20 18:48:03.818154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.578 [2024-07-20 18:48:03.832703] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.578 [2024-07-20 18:48:03.832730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.578 [2024-07-20 18:48:03.848679] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.578 [2024-07-20 18:48:03.848706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.578 [2024-07-20 18:48:03.859857] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.578 [2024-07-20 18:48:03.859884] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.578 [2024-07-20 18:48:03.874469] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.578 [2024-07-20 18:48:03.874496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.578 [2024-07-20 18:48:03.885917] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.578 [2024-07-20 18:48:03.885944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.578 [2024-07-20 18:48:03.901101] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.578 [2024-07-20 18:48:03.901127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.864 [2024-07-20 18:48:03.913526] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.864 [2024-07-20 18:48:03.913564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.864 [2024-07-20 18:48:03.927144] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.864 [2024-07-20 18:48:03.927171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.864 [2024-07-20 18:48:03.939017] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.864 [2024-07-20 18:48:03.939045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.864 [2024-07-20 18:48:03.949155] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.864 [2024-07-20 18:48:03.949196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.864 [2024-07-20 18:48:03.961205] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.864 [2024-07-20 18:48:03.961232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.864 [2024-07-20 18:48:03.972286] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.864 [2024-07-20 18:48:03.972328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.864 [2024-07-20 18:48:03.985123] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.864 [2024-07-20 18:48:03.985150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.864 [2024-07-20 18:48:03.996063] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.864 [2024-07-20 18:48:03.996104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.864 [2024-07-20 18:48:04.008411] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.864 [2024-07-20 18:48:04.008438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.864 [2024-07-20 18:48:04.022252] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.864 [2024-07-20 18:48:04.022280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.864 [2024-07-20 18:48:04.033155] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.864 [2024-07-20 18:48:04.033182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.864 [2024-07-20 18:48:04.045878] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.864 [2024-07-20 18:48:04.045905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.864 [2024-07-20 18:48:04.057678] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.864 [2024-07-20 18:48:04.057706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.864 [2024-07-20 18:48:04.067684] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.864 [2024-07-20 18:48:04.067712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.864 [2024-07-20 18:48:04.080340] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.864 [2024-07-20 18:48:04.080381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.864 [2024-07-20 18:48:04.091756] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.864 [2024-07-20 18:48:04.091784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.864 [2024-07-20 18:48:04.103459] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.864 [2024-07-20 18:48:04.103486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.864 [2024-07-20 18:48:04.116417] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.864 [2024-07-20 18:48:04.116444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.865 [2024-07-20 18:48:04.129026] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.865 [2024-07-20 18:48:04.129053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.865 [2024-07-20 18:48:04.142409] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.865 [2024-07-20 18:48:04.142437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.865 [2024-07-20 18:48:04.153134] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.865 [2024-07-20 18:48:04.153162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.865 [2024-07-20 18:48:04.165659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.865 [2024-07-20 18:48:04.165686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.122 [2024-07-20 18:48:04.176963] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.123 [2024-07-20 18:48:04.176991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.123 [2024-07-20 18:48:04.189441] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.123 [2024-07-20 18:48:04.189469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.123 [2024-07-20 18:48:04.201292] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.123 [2024-07-20 18:48:04.201319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.123 [2024-07-20 18:48:04.214496] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.123 [2024-07-20 18:48:04.214524] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.123 [2024-07-20 18:48:04.227856] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.123 [2024-07-20 18:48:04.227884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.123 [2024-07-20 18:48:04.239030] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.123 [2024-07-20 18:48:04.239057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.123 [2024-07-20 18:48:04.249944] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.123 [2024-07-20 18:48:04.249971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.123 [2024-07-20 18:48:04.260161] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.123 [2024-07-20 18:48:04.260188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.123 [2024-07-20 18:48:04.271857] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.123 [2024-07-20 18:48:04.271884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.123 [2024-07-20 18:48:04.283354] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.123 [2024-07-20 18:48:04.283380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.123 [2024-07-20 18:48:04.293181] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.123 [2024-07-20 18:48:04.293207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.123 [2024-07-20 18:48:04.303648] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.123 [2024-07-20 18:48:04.303675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.123 [2024-07-20 18:48:04.314895] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.123 [2024-07-20 18:48:04.314922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.123 [2024-07-20 18:48:04.327114] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.123 [2024-07-20 18:48:04.327141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.123 [2024-07-20 18:48:04.338326] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.123 [2024-07-20 18:48:04.338354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.123 [2024-07-20 18:48:04.347975] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.123 [2024-07-20 18:48:04.348002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.123 [2024-07-20 18:48:04.362115] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.123 [2024-07-20 18:48:04.362143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.123 [2024-07-20 18:48:04.374061] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.123 [2024-07-20 18:48:04.374088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.123 [2024-07-20 18:48:04.391510] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.123 [2024-07-20 18:48:04.391537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.123 [2024-07-20 18:48:04.403531] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.123 [2024-07-20 18:48:04.403559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.123 [2024-07-20 18:48:04.414468] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.123 [2024-07-20 18:48:04.414510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.123 [2024-07-20 18:48:04.428493] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.123 [2024-07-20 18:48:04.428531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.123 [2024-07-20 18:48:04.442927] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.123 [2024-07-20 18:48:04.442954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.381 [2024-07-20 18:48:04.454240] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.381 [2024-07-20 18:48:04.454267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.381 [2024-07-20 18:48:04.467359] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.381 [2024-07-20 18:48:04.467387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.381 [2024-07-20 18:48:04.479230] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.381 [2024-07-20 18:48:04.479257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.381 [2024-07-20 18:48:04.490126] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.381 [2024-07-20 18:48:04.490167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.381 [2024-07-20 18:48:04.501191] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.381 [2024-07-20 18:48:04.501218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.381 [2024-07-20 18:48:04.514032] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.381 [2024-07-20 18:48:04.514059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.381 [2024-07-20 18:48:04.524739] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.381 [2024-07-20 18:48:04.524766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.381 [2024-07-20 18:48:04.538136] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.381 [2024-07-20 18:48:04.538163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.381 [2024-07-20 18:48:04.549523] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.381 [2024-07-20 18:48:04.549550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.381 [2024-07-20 18:48:04.560474] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.381 [2024-07-20 18:48:04.560501] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.381 [2024-07-20 18:48:04.573299] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.381 [2024-07-20 18:48:04.573327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.381 [2024-07-20 18:48:04.584817] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.381 [2024-07-20 18:48:04.584844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.381 [2024-07-20 18:48:04.598617] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.381 [2024-07-20 18:48:04.598644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.381 [2024-07-20 18:48:04.612882] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.381 [2024-07-20 18:48:04.612909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.381 [2024-07-20 18:48:04.623767] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.381 [2024-07-20 18:48:04.623809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.381 [2024-07-20 18:48:04.634098] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.381 [2024-07-20 18:48:04.634126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.381 [2024-07-20 18:48:04.644733] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.381 [2024-07-20 18:48:04.644760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.381 [2024-07-20 18:48:04.655526] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.381 [2024-07-20 18:48:04.655553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.381 [2024-07-20 18:48:04.670238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.381 [2024-07-20 18:48:04.670265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.381 [2024-07-20 18:48:04.680404] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.381 [2024-07-20 18:48:04.680431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.381 [2024-07-20 18:48:04.694500] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.381 [2024-07-20 18:48:04.694528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.707964] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.707993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.719195] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.719222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.731249] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.731276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.743311] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.743339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.753905] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.753934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.764408] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.764436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.778273] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.778300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.788230] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.788257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.802497] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.802524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.816746] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.816785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.833000] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.833028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.841416] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.841442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 00:17:54.646 Latency(us) 00:17:54.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.646 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:17:54.646 Nvme1n1 : 5.02 10486.10 81.92 0.00 0.00 12185.28 4029.25 26991.12 00:17:54.646 =================================================================================================================== 00:17:54.646 Total : 10486.10 81.92 0.00 0.00 12185.28 4029.25 26991.12 00:17:54.646 [2024-07-20 18:48:04.847258] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.847281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.855253] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.855279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.863291] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.863325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.871351] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.871398] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.879373] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.879419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.887396] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.887442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.895397] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.895440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.903433] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.903478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.911451] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.911494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.919489] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.919542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.927495] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.927538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.935509] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.935562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.943556] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.943600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.951558] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.951616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.959580] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.959623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.646 [2024-07-20 18:48:04.967610] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.646 [2024-07-20 18:48:04.967652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.904 [2024-07-20 18:48:04.975629] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.904 [2024-07-20 18:48:04.975674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.904 [2024-07-20 18:48:04.983637] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.904 [2024-07-20 18:48:04.983679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.904 [2024-07-20 18:48:04.991634] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.904 [2024-07-20 18:48:04.991661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.904 [2024-07-20 18:48:04.999657] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.905 [2024-07-20 18:48:04.999688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.905 [2024-07-20 18:48:05.007714] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.905 [2024-07-20 18:48:05.007759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.905 [2024-07-20 18:48:05.015728] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.905 [2024-07-20 18:48:05.015772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.905 [2024-07-20 18:48:05.023736] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.905 [2024-07-20 18:48:05.023772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.905 [2024-07-20 18:48:05.031743] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.905 [2024-07-20 18:48:05.031770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.905 [2024-07-20 18:48:05.039809] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.905 [2024-07-20 18:48:05.039855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.905 [2024-07-20 18:48:05.047844] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.905 [2024-07-20 18:48:05.047888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.905 [2024-07-20 18:48:05.055866] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.905 [2024-07-20 18:48:05.055901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.905 [2024-07-20 18:48:05.063852] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.905 [2024-07-20 18:48:05.063875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.905 [2024-07-20 18:48:05.071874] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.905 [2024-07-20 18:48:05.071897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.905 [2024-07-20 18:48:05.079889] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.905 [2024-07-20 18:48:05.079912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1381821) - No such process 00:17:54.905 18:48:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1381821 00:17:54.905 18:48:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:54.905 18:48:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.905 18:48:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:54.905 18:48:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.905 18:48:05 nvmf_tcp.nvmf_zcopy -- 
target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:54.905 18:48:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.905 18:48:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:54.905 delay0 00:17:54.905 18:48:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.905 18:48:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:54.905 18:48:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.905 18:48:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:54.905 18:48:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.905 18:48:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:54.905 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.905 [2024-07-20 18:48:05.168500] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:01.461 Initializing NVMe Controllers 00:18:01.461 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:01.461 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:01.461 Initialization complete. Launching workers. 00:18:01.461 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 55 00:18:01.461 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 342, failed to submit 33 00:18:01.461 success 115, unsuccess 227, failed 0 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:01.461 rmmod nvme_tcp 00:18:01.461 rmmod nvme_fabrics 00:18:01.461 rmmod nvme_keyring 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1380486 ']' 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1380486 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 1380486 ']' 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 1380486 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers 
-o comm= 1380486 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1380486' 00:18:01.461 killing process with pid 1380486 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 1380486 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 1380486 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:01.461 18:48:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.989 18:48:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:03.989 00:18:03.989 real 0m27.609s 00:18:03.989 user 0m40.858s 00:18:03.989 sys 0m8.326s 00:18:03.989 18:48:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:03.989 18:48:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:03.989 ************************************ 00:18:03.989 END TEST nvmf_zcopy 00:18:03.989 ************************************ 00:18:03.989 18:48:13 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:03.989 18:48:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:03.989 18:48:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:03.989 18:48:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:03.989 ************************************ 00:18:03.989 START TEST nvmf_nmic 00:18:03.989 ************************************ 00:18:03.989 18:48:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:03.989 * Looking for test storage... 
00:18:03.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:03.989 18:48:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:03.989 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:03.989 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.989 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.989 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.989 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.989 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.989 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.989 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.989 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.989 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.989 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.989 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:03.989 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:03.989 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.989 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.989 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:03.989 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.990 18:48:13 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:03.990 18:48:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:05.888 
18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:05.888 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.888 18:48:15 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:05.888 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:05.888 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:05.888 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:05.888 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:05.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:05.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:18:05.889 00:18:05.889 --- 10.0.0.2 ping statistics --- 00:18:05.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.889 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:05.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:05.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:18:05.889 00:18:05.889 --- 10.0.0.1 ping statistics --- 00:18:05.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.889 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1385809 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1385809 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 1385809 ']' 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:05.889 18:48:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:05.889 [2024-07-20 18:48:15.898201] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:05.889 [2024-07-20 18:48:15.898267] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.889 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.889 [2024-07-20 18:48:15.966024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:05.889 [2024-07-20 18:48:16.068007] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.889 [2024-07-20 18:48:16.068074] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:05.889 [2024-07-20 18:48:16.068091] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.889 [2024-07-20 18:48:16.068106] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.889 [2024-07-20 18:48:16.068118] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:05.889 [2024-07-20 18:48:16.068182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.889 [2024-07-20 18:48:16.068234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.889 [2024-07-20 18:48:16.068288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:05.889 [2024-07-20 18:48:16.068291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.889 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:05.889 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:18:05.889 18:48:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:05.889 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:05.889 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:06.146 [2024-07-20 18:48:16.226609] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:06.146 Malloc0 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:06.146 [2024-07-20 18:48:16.280121] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:06.146 test case1: single bdev can't be used in multiple subsystems 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:06.146 [2024-07-20 18:48:16.303909] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:06.146 [2024-07-20 18:48:16.303939] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:06.146 [2024-07-20 18:48:16.303954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.146 request: 00:18:06.146 { 00:18:06.146 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:06.146 "namespace": { 00:18:06.146 "bdev_name": "Malloc0", 00:18:06.146 "no_auto_visible": false 00:18:06.146 }, 00:18:06.146 "method": "nvmf_subsystem_add_ns", 00:18:06.146 "req_id": 1 00:18:06.146 } 00:18:06.146 Got JSON-RPC error response 00:18:06.146 response: 00:18:06.146 { 00:18:06.146 "code": -32602, 00:18:06.146 "message": "Invalid parameters" 00:18:06.146 } 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:06.146 Adding namespace failed - expected result. 
00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:06.146 test case2: host connect to nvmf target in multiple paths 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:06.146 [2024-07-20 18:48:16.312015] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.146 18:48:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:06.709 18:48:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:07.273 18:48:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:07.273 18:48:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:18:07.273 18:48:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:07.273 18:48:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:07.273 18:48:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:18:09.795 18:48:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:09.795 18:48:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:09.795 18:48:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:09.795 18:48:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:09.795 18:48:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:09.795 18:48:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:18:09.795 18:48:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:09.795 [global] 00:18:09.795 thread=1 00:18:09.795 invalidate=1 00:18:09.795 rw=write 00:18:09.795 time_based=1 00:18:09.795 runtime=1 00:18:09.795 ioengine=libaio 00:18:09.795 direct=1 00:18:09.795 bs=4096 00:18:09.795 iodepth=1 00:18:09.795 norandommap=0 00:18:09.795 numjobs=1 00:18:09.795 00:18:09.795 verify_dump=1 00:18:09.795 verify_backlog=512 00:18:09.795 verify_state_save=0 00:18:09.795 do_verify=1 00:18:09.795 verify=crc32c-intel 00:18:09.795 [job0] 00:18:09.795 filename=/dev/nvme0n1 00:18:09.795 Could not set queue depth (nvme0n1) 00:18:09.795 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:09.795 fio-3.35 00:18:09.795 Starting 1 thread 00:18:10.728 00:18:10.728 job0: (groupid=0, jobs=1): err= 0: pid=1386320: Sat Jul 20 18:48:20 2024 00:18:10.728 read: IOPS=498, BW=1992KiB/s (2040kB/s)(2024KiB/1016msec) 00:18:10.728 slat (nsec): min=7302, max=34811, avg=14258.99, stdev=2708.71 
00:18:10.728 clat (usec): min=467, max=41138, avg=1513.96, stdev=6160.09 00:18:10.728 lat (usec): min=476, max=41152, avg=1528.22, stdev=6161.25 00:18:10.728 clat percentiles (usec): 00:18:10.728 | 1.00th=[ 486], 5.00th=[ 502], 10.00th=[ 510], 20.00th=[ 515], 00:18:10.728 | 30.00th=[ 519], 40.00th=[ 523], 50.00th=[ 529], 60.00th=[ 529], 00:18:10.728 | 70.00th=[ 545], 80.00th=[ 644], 90.00th=[ 676], 95.00th=[ 709], 00:18:10.728 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:10.728 | 99.99th=[41157] 00:18:10.728 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:18:10.728 slat (usec): min=7, max=31092, avg=81.54, stdev=1373.23 00:18:10.728 clat (usec): min=296, max=2105, avg=382.67, stdev=108.41 00:18:10.728 lat (usec): min=303, max=31566, avg=464.21, stdev=1381.70 00:18:10.728 clat percentiles (usec): 00:18:10.728 | 1.00th=[ 310], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 330], 00:18:10.728 | 30.00th=[ 343], 40.00th=[ 359], 50.00th=[ 371], 60.00th=[ 388], 00:18:10.728 | 70.00th=[ 400], 80.00th=[ 416], 90.00th=[ 433], 95.00th=[ 465], 00:18:10.728 | 99.00th=[ 537], 99.50th=[ 578], 99.90th=[ 2114], 99.95th=[ 2114], 00:18:10.728 | 99.99th=[ 2114] 00:18:10.728 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:10.728 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:10.728 lat (usec) : 500=51.28%, 750=47.25%, 1000=0.10% 00:18:10.728 lat (msec) : 2=0.10%, 4=0.10%, 50=1.18% 00:18:10.728 cpu : usr=1.38%, sys=1.87%, ctx=1021, majf=0, minf=2 00:18:10.728 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:10.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.728 issued rwts: total=506,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.728 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:10.728 00:18:10.728 Run status group 0 (all jobs): 00:18:10.728 READ: bw=1992KiB/s (2040kB/s), 1992KiB/s-1992KiB/s (2040kB/s-2040kB/s), io=2024KiB (2073kB), run=1016-1016msec 00:18:10.728 WRITE: bw=2016KiB/s (2064kB/s), 2016KiB/s-2016KiB/s (2064kB/s-2064kB/s), io=2048KiB (2097kB), run=1016-1016msec 00:18:10.728 00:18:10.728 Disk stats (read/write): 00:18:10.728 nvme0n1: ios=529/512, merge=0/0, ticks=1625/192, in_queue=1817, util=98.90% 00:18:10.728 18:48:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:10.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:10.728 18:48:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:10.728 18:48:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:18:10.728 18:48:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:10.728 18:48:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:10.728 18:48:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:10.728 18:48:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:10.728 18:48:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:18:10.728 18:48:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:10.728 18:48:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:10.728 18:48:21 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:18:10.728 18:48:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:10.728 18:48:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:10.728 18:48:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:10.728 18:48:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:10.728 18:48:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:10.728 rmmod nvme_tcp 00:18:10.728 rmmod nvme_fabrics 00:18:10.728 rmmod nvme_keyring 00:18:10.728 18:48:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:10.987 18:48:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:10.987 18:48:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:10.987 18:48:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1385809 ']' 00:18:10.987 18:48:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1385809 00:18:10.987 18:48:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 1385809 ']' 00:18:10.987 18:48:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 1385809 00:18:10.987 18:48:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:18:10.987 18:48:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:10.987 18:48:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1385809 00:18:10.987 18:48:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:10.987 18:48:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:10.987 18:48:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1385809' 00:18:10.987 killing process with pid 1385809 00:18:10.987 18:48:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 1385809 00:18:10.987 18:48:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 1385809 00:18:11.246 18:48:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:11.246 18:48:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:11.246 18:48:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:11.246 18:48:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:11.246 18:48:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:11.246 18:48:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.246 18:48:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.246 18:48:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.149 18:48:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:13.149 00:18:13.149 real 0m9.649s 00:18:13.149 user 0m21.887s 00:18:13.149 sys 0m2.196s 00:18:13.149 18:48:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:13.149 18:48:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:13.149 ************************************ 00:18:13.149 END TEST nvmf_nmic 00:18:13.149 ************************************ 00:18:13.149 18:48:23 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:13.149 18:48:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:13.149 
18:48:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:13.149 18:48:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:13.149 ************************************ 00:18:13.149 START TEST nvmf_fio_target 00:18:13.149 ************************************ 00:18:13.149 18:48:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:13.408 * Looking for test storage... 00:18:13.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:13.409 18:48:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.362 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:15.362 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:15.362 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:15.362 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:15.362 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:15.362 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:15.362 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:15.362 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:15.362 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:15.362 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:15.362 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:15.362 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:15.362 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:15.362 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:15.362 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:15.362 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:15.362 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:15.362 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:15.362 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:15.362 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:15.362 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:15.362 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:15.362 18:48:25 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:15.362 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:15.363 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:15.363 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.363 18:48:25 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:15.363 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:15.363 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:15.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:15.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:18:15.363 00:18:15.363 --- 10.0.0.2 ping statistics --- 00:18:15.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.363 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:15.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:15.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:18:15.363 00:18:15.363 --- 10.0.0.1 ping statistics --- 00:18:15.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.363 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1388394 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1388394 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 1388394 ']' 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:15.363 18:48:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.621 [2024-07-20 18:48:25.688225] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:15.621 [2024-07-20 18:48:25.688317] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.621 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.621 [2024-07-20 18:48:25.759880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:15.621 [2024-07-20 18:48:25.855404] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.621 [2024-07-20 18:48:25.855468] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.621 [2024-07-20 18:48:25.855495] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.621 [2024-07-20 18:48:25.855510] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.621 [2024-07-20 18:48:25.855522] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:15.621 [2024-07-20 18:48:25.855614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.621 [2024-07-20 18:48:25.855669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.621 [2024-07-20 18:48:25.855720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:15.621 [2024-07-20 18:48:25.855722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.878 18:48:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:15.879 18:48:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:18:15.879 18:48:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:15.879 18:48:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:15.879 18:48:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.879 18:48:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.879 18:48:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:16.136 [2024-07-20 18:48:26.293486] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.136 18:48:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:16.393 18:48:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:16.394 18:48:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:16.651 18:48:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:16.651 18:48:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:16.908 18:48:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:16.908 18:48:27 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:17.164 18:48:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:17.164 18:48:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:17.421 18:48:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:17.678 18:48:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:17.679 18:48:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:17.936 18:48:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:17.936 18:48:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:18.192 18:48:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:18.192 18:48:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:18.449 18:48:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:18.705 18:48:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:18.705 18:48:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:18.962 18:48:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:18.962 18:48:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:19.219 18:48:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:19.477 [2024-07-20 18:48:29.701981] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:19.477 18:48:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:19.735 18:48:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:19.993 18:48:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:20.559 18:48:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:20.559 18:48:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:18:20.559 18:48:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # 
local nvme_device_counter=1 nvme_devices=0 00:18:20.559 18:48:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:18:20.559 18:48:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:18:20.559 18:48:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:18:23.084 18:48:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:23.084 18:48:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:23.084 18:48:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:23.084 18:48:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:18:23.084 18:48:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:23.084 18:48:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:18:23.084 18:48:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:23.084 [global] 00:18:23.084 thread=1 00:18:23.084 invalidate=1 00:18:23.084 rw=write 00:18:23.084 time_based=1 00:18:23.084 runtime=1 00:18:23.084 ioengine=libaio 00:18:23.084 direct=1 00:18:23.084 bs=4096 00:18:23.084 iodepth=1 00:18:23.084 norandommap=0 00:18:23.084 numjobs=1 00:18:23.084 00:18:23.084 verify_dump=1 00:18:23.084 verify_backlog=512 00:18:23.084 verify_state_save=0 00:18:23.084 do_verify=1 00:18:23.084 verify=crc32c-intel 00:18:23.084 [job0] 00:18:23.084 filename=/dev/nvme0n1 00:18:23.084 [job1] 00:18:23.084 filename=/dev/nvme0n2 00:18:23.084 [job2] 00:18:23.084 filename=/dev/nvme0n3 00:18:23.084 [job3] 00:18:23.084 filename=/dev/nvme0n4 00:18:23.084 Could not set queue depth (nvme0n1) 00:18:23.084 Could not set queue depth (nvme0n2) 00:18:23.084 Could not set queue depth (nvme0n3) 00:18:23.085 Could not set queue depth (nvme0n4) 00:18:23.085 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:23.085 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:23.085 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:23.085 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:23.085 fio-3.35 00:18:23.085 Starting 4 threads 00:18:24.453 00:18:24.453 job0: (groupid=0, jobs=1): err= 0: pid=1389465: Sat Jul 20 18:48:34 2024 00:18:24.453 read: IOPS=18, BW=73.8KiB/s (75.6kB/s)(76.0KiB/1030msec) 00:18:24.453 slat (nsec): min=12104, max=44209, avg=24146.21, stdev=10786.36 00:18:24.453 clat (usec): min=40864, max=41474, avg=40992.35, stdev=124.53 00:18:24.453 lat (usec): min=40908, max=41486, avg=41016.49, stdev=119.47 00:18:24.453 clat percentiles (usec): 00:18:24.453 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:18:24.453 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:24.453 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:18:24.453 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:18:24.453 | 99.99th=[41681] 00:18:24.453 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:18:24.453 slat (nsec): min=8034, max=40678, avg=17701.40, stdev=6481.45 00:18:24.453 clat (usec): min=393, 
max=624, avg=466.82, stdev=32.50 00:18:24.453 lat (usec): min=402, max=640, avg=484.52, stdev=34.21 00:18:24.453 clat percentiles (usec): 00:18:24.453 | 1.00th=[ 400], 5.00th=[ 412], 10.00th=[ 424], 20.00th=[ 445], 00:18:24.453 | 30.00th=[ 453], 40.00th=[ 461], 50.00th=[ 465], 60.00th=[ 474], 00:18:24.453 | 70.00th=[ 478], 80.00th=[ 490], 90.00th=[ 510], 95.00th=[ 529], 00:18:24.453 | 99.00th=[ 553], 99.50th=[ 578], 99.90th=[ 627], 99.95th=[ 627], 00:18:24.453 | 99.99th=[ 627] 00:18:24.453 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:18:24.453 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:24.453 lat (usec) : 500=83.43%, 750=12.99% 00:18:24.453 lat (msec) : 50=3.58% 00:18:24.453 cpu : usr=0.78%, sys=0.87%, ctx=531, majf=0, minf=1 00:18:24.453 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:24.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.453 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.453 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:24.453 job1: (groupid=0, jobs=1): err= 0: pid=1389466: Sat Jul 20 18:48:34 2024 00:18:24.453 read: IOPS=114, BW=459KiB/s (470kB/s)(468KiB/1020msec) 00:18:24.453 slat (nsec): min=6169, max=33291, avg=10580.36, stdev=6857.90 00:18:24.453 clat (usec): min=604, max=42026, avg=6918.28, stdev=14786.06 00:18:24.453 lat (usec): min=611, max=42039, avg=6928.86, stdev=14791.15 00:18:24.453 clat percentiles (usec): 00:18:24.453 | 1.00th=[ 611], 5.00th=[ 611], 10.00th=[ 619], 20.00th=[ 627], 00:18:24.453 | 30.00th=[ 627], 40.00th=[ 635], 50.00th=[ 644], 60.00th=[ 652], 00:18:24.453 | 70.00th=[ 660], 80.00th=[ 676], 90.00th=[41157], 95.00th=[42206], 00:18:24.453 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:24.453 | 99.99th=[42206] 00:18:24.453 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:18:24.453 slat (nsec): min=6740, max=40042, avg=14746.88, stdev=6151.13 00:18:24.453 clat (usec): min=303, max=686, avg=389.26, stdev=60.92 00:18:24.453 lat (usec): min=312, max=701, avg=404.01, stdev=60.73 00:18:24.453 clat percentiles (usec): 00:18:24.453 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 326], 20.00th=[ 338], 00:18:24.453 | 30.00th=[ 351], 40.00th=[ 367], 50.00th=[ 383], 60.00th=[ 392], 00:18:24.453 | 70.00th=[ 404], 80.00th=[ 429], 90.00th=[ 465], 95.00th=[ 502], 00:18:24.453 | 99.00th=[ 619], 99.50th=[ 635], 99.90th=[ 685], 99.95th=[ 685], 00:18:24.453 | 99.99th=[ 685] 00:18:24.453 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:18:24.453 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:24.453 lat (usec) : 500=77.27%, 750=19.71%, 1000=0.16% 00:18:24.453 lat (msec) : 50=2.86% 00:18:24.453 cpu : usr=0.49%, sys=0.88%, ctx=629, majf=0, minf=1 00:18:24.453 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:24.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.453 issued rwts: total=117,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.453 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:24.453 job2: (groupid=0, jobs=1): err= 0: pid=1389467: Sat Jul 20 18:48:34 2024 00:18:24.453 read: IOPS=19, BW=77.1KiB/s (79.0kB/s)(80.0KiB/1037msec) 
00:18:24.453 slat (nsec): min=12399, max=35893, avg=23041.05, stdev=10211.40 00:18:24.453 clat (usec): min=40914, max=42021, avg=41551.57, stdev=480.24 00:18:24.453 lat (usec): min=40932, max=42057, avg=41574.62, stdev=485.45 00:18:24.453 clat percentiles (usec): 00:18:24.453 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:24.453 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:18:24.453 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:24.453 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:24.453 | 99.99th=[42206] 00:18:24.453 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:18:24.453 slat (nsec): min=6620, max=41281, avg=14336.86, stdev=5458.95 00:18:24.453 clat (usec): min=308, max=553, avg=382.67, stdev=42.59 00:18:24.453 lat (usec): min=316, max=569, avg=397.01, stdev=44.10 00:18:24.453 clat percentiles (usec): 00:18:24.453 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 347], 00:18:24.454 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 379], 60.00th=[ 392], 00:18:24.454 | 70.00th=[ 400], 80.00th=[ 412], 90.00th=[ 437], 95.00th=[ 465], 00:18:24.454 | 99.00th=[ 498], 99.50th=[ 537], 99.90th=[ 553], 99.95th=[ 553], 00:18:24.454 | 99.99th=[ 553] 00:18:24.454 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:18:24.454 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:24.454 lat (usec) : 500=95.30%, 750=0.94% 00:18:24.454 lat (msec) : 50=3.76% 00:18:24.454 cpu : usr=0.39%, sys=0.58%, ctx=532, majf=0, minf=2 00:18:24.454 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:24.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.454 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.454 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:24.454 job3: (groupid=0, jobs=1): err= 0: pid=1389468: Sat Jul 20 18:48:34 2024 00:18:24.454 read: IOPS=803, BW=3213KiB/s (3290kB/s)(3216KiB/1001msec) 00:18:24.454 slat (nsec): min=6446, max=74588, avg=19054.48, stdev=11040.38 00:18:24.454 clat (usec): min=566, max=1022, avg=649.91, stdev=61.51 00:18:24.454 lat (usec): min=574, max=1036, avg=668.96, stdev=69.51 00:18:24.454 clat percentiles (usec): 00:18:24.454 | 1.00th=[ 578], 5.00th=[ 586], 10.00th=[ 594], 20.00th=[ 603], 00:18:24.454 | 30.00th=[ 611], 40.00th=[ 619], 50.00th=[ 627], 60.00th=[ 644], 00:18:24.454 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 742], 95.00th=[ 775], 00:18:24.454 | 99.00th=[ 840], 99.50th=[ 857], 99.90th=[ 1020], 99.95th=[ 1020], 00:18:24.454 | 99.99th=[ 1020] 00:18:24.454 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:18:24.454 slat (nsec): min=8511, max=91639, avg=22240.85, stdev=12781.31 00:18:24.454 clat (usec): min=305, max=2866, avg=420.02, stdev=109.87 00:18:24.454 lat (usec): min=315, max=2884, avg=442.26, stdev=110.49 00:18:24.454 clat percentiles (usec): 00:18:24.454 | 1.00th=[ 310], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 338], 00:18:24.454 | 30.00th=[ 367], 40.00th=[ 392], 50.00th=[ 408], 60.00th=[ 429], 00:18:24.454 | 70.00th=[ 449], 80.00th=[ 486], 90.00th=[ 529], 95.00th=[ 553], 00:18:24.454 | 99.00th=[ 652], 99.50th=[ 734], 99.90th=[ 791], 99.95th=[ 2868], 00:18:24.454 | 99.99th=[ 2868] 00:18:24.454 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 
0.00, samples=1 00:18:24.454 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:24.454 lat (usec) : 500=46.77%, 750=49.73%, 1000=3.39% 00:18:24.454 lat (msec) : 2=0.05%, 4=0.05% 00:18:24.454 cpu : usr=2.90%, sys=4.50%, ctx=1829, majf=0, minf=1 00:18:24.454 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:24.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.454 issued rwts: total=804,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.454 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:24.454 00:18:24.454 Run status group 0 (all jobs): 00:18:24.454 READ: bw=3703KiB/s (3792kB/s), 73.8KiB/s-3213KiB/s (75.6kB/s-3290kB/s), io=3840KiB (3932kB), run=1001-1037msec 00:18:24.454 WRITE: bw=9875KiB/s (10.1MB/s), 1975KiB/s-4092KiB/s (2022kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1037msec 00:18:24.454 00:18:24.454 Disk stats (read/write): 00:18:24.454 nvme0n1: ios=64/512, merge=0/0, ticks=650/224, in_queue=874, util=88.08% 00:18:24.454 nvme0n2: ios=148/512, merge=0/0, ticks=874/201, in_queue=1075, util=92.87% 00:18:24.454 nvme0n3: ios=15/512, merge=0/0, ticks=622/191, in_queue=813, util=88.98% 00:18:24.454 nvme0n4: ios=622/1024, merge=0/0, ticks=1306/391, in_queue=1697, util=97.78% 00:18:24.454 18:48:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:24.454 [global] 00:18:24.454 thread=1 00:18:24.454 invalidate=1 00:18:24.454 rw=randwrite 00:18:24.454 time_based=1 00:18:24.454 runtime=1 00:18:24.454 ioengine=libaio 00:18:24.454 direct=1 00:18:24.454 bs=4096 00:18:24.454 iodepth=1 00:18:24.454 norandommap=0 00:18:24.454 numjobs=1 00:18:24.454 00:18:24.454 verify_dump=1 00:18:24.454 verify_backlog=512 00:18:24.454 verify_state_save=0 00:18:24.454 do_verify=1 00:18:24.454 verify=crc32c-intel 00:18:24.454 [job0] 00:18:24.454 filename=/dev/nvme0n1 00:18:24.454 [job1] 00:18:24.454 filename=/dev/nvme0n2 00:18:24.454 [job2] 00:18:24.454 filename=/dev/nvme0n3 00:18:24.454 [job3] 00:18:24.454 filename=/dev/nvme0n4 00:18:24.454 Could not set queue depth (nvme0n1) 00:18:24.454 Could not set queue depth (nvme0n2) 00:18:24.454 Could not set queue depth (nvme0n3) 00:18:24.454 Could not set queue depth (nvme0n4) 00:18:24.454 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:24.454 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:24.454 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:24.454 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:24.454 fio-3.35 00:18:24.454 Starting 4 threads 00:18:25.825 00:18:25.825 job0: (groupid=0, jobs=1): err= 0: pid=1389694: Sat Jul 20 18:48:35 2024 00:18:25.825 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:18:25.825 slat (nsec): min=5755, max=65681, avg=18047.34, stdev=10833.54 00:18:25.825 clat (usec): min=431, max=795, avg=504.32, stdev=56.36 00:18:25.825 lat (usec): min=437, max=822, avg=522.37, stdev=62.52 00:18:25.825 clat percentiles (usec): 00:18:25.825 | 1.00th=[ 441], 5.00th=[ 449], 10.00th=[ 457], 20.00th=[ 465], 00:18:25.825 | 30.00th=[ 474], 40.00th=[ 482], 50.00th=[ 490], 60.00th=[ 502], 
00:18:25.825 | 70.00th=[ 510], 80.00th=[ 523], 90.00th=[ 586], 95.00th=[ 644], 00:18:25.825 | 99.00th=[ 693], 99.50th=[ 725], 99.90th=[ 799], 99.95th=[ 799], 00:18:25.825 | 99.99th=[ 799] 00:18:25.825 write: IOPS=1028, BW=4116KiB/s (4215kB/s)(4120KiB/1001msec); 0 zone resets 00:18:25.825 slat (nsec): min=7256, max=88103, avg=18040.37, stdev=9978.83 00:18:25.825 clat (usec): min=300, max=1281, avg=423.54, stdev=98.97 00:18:25.825 lat (usec): min=309, max=1306, avg=441.58, stdev=100.61 00:18:25.825 clat percentiles (usec): 00:18:25.825 | 1.00th=[ 310], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 338], 00:18:25.825 | 30.00th=[ 363], 40.00th=[ 392], 50.00th=[ 408], 60.00th=[ 433], 00:18:25.825 | 70.00th=[ 453], 80.00th=[ 486], 90.00th=[ 537], 95.00th=[ 578], 00:18:25.825 | 99.00th=[ 766], 99.50th=[ 865], 99.90th=[ 1106], 99.95th=[ 1287], 00:18:25.825 | 99.99th=[ 1287] 00:18:25.825 bw ( KiB/s): min= 4096, max= 4096, per=34.63%, avg=4096.00, stdev= 0.00, samples=1 00:18:25.825 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:25.825 lat (usec) : 500=70.16%, 750=29.21%, 1000=0.44% 00:18:25.825 lat (msec) : 2=0.19% 00:18:25.825 cpu : usr=1.80%, sys=5.50%, ctx=2055, majf=0, minf=1 00:18:25.825 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:25.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.825 issued rwts: total=1024,1030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.825 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:25.825 job1: (groupid=0, jobs=1): err= 0: pid=1389695: Sat Jul 20 18:48:35 2024 00:18:25.825 read: IOPS=18, BW=75.5KiB/s (77.3kB/s)(76.0KiB/1007msec) 00:18:25.825 slat (nsec): min=12027, max=35741, avg=24465.63, stdev=9964.05 00:18:25.825 clat (usec): min=40885, max=42063, avg=41248.36, stdev=466.70 00:18:25.825 lat (usec): min=40897, max=42091, avg=41272.83, stdev=468.44 00:18:25.825 clat percentiles (usec): 00:18:25.825 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:18:25.825 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:25.825 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:18:25.825 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:25.825 | 99.99th=[42206] 00:18:25.825 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:18:25.825 slat (nsec): min=7581, max=37445, avg=16474.53, stdev=6707.08 00:18:25.825 clat (usec): min=310, max=581, avg=414.25, stdev=52.46 00:18:25.825 lat (usec): min=322, max=611, avg=430.73, stdev=53.37 00:18:25.825 clat percentiles (usec): 00:18:25.825 | 1.00th=[ 318], 5.00th=[ 330], 10.00th=[ 343], 20.00th=[ 363], 00:18:25.825 | 30.00th=[ 388], 40.00th=[ 400], 50.00th=[ 412], 60.00th=[ 429], 00:18:25.825 | 70.00th=[ 445], 80.00th=[ 461], 90.00th=[ 482], 95.00th=[ 498], 00:18:25.825 | 99.00th=[ 537], 99.50th=[ 545], 99.90th=[ 586], 99.95th=[ 586], 00:18:25.825 | 99.99th=[ 586] 00:18:25.825 bw ( KiB/s): min= 4096, max= 4096, per=34.63%, avg=4096.00, stdev= 0.00, samples=1 00:18:25.825 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:25.825 lat (usec) : 500=91.90%, 750=4.52% 00:18:25.825 lat (msec) : 50=3.58% 00:18:25.825 cpu : usr=0.89%, sys=0.80%, ctx=531, majf=0, minf=1 00:18:25.825 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:25.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:18:25.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.825 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.825 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:25.825 job2: (groupid=0, jobs=1): err= 0: pid=1389696: Sat Jul 20 18:48:35 2024 00:18:25.825 read: IOPS=62, BW=250KiB/s (256kB/s)(260KiB/1041msec) 00:18:25.825 slat (nsec): min=7640, max=33392, avg=16294.18, stdev=6905.90 00:18:25.825 clat (usec): min=582, max=42083, avg=12654.14, stdev=18553.49 00:18:25.825 lat (usec): min=596, max=42096, avg=12670.44, stdev=18557.42 00:18:25.825 clat percentiles (usec): 00:18:25.825 | 1.00th=[ 586], 5.00th=[ 603], 10.00th=[ 603], 20.00th=[ 619], 00:18:25.825 | 30.00th=[ 635], 40.00th=[ 652], 50.00th=[ 734], 60.00th=[ 824], 00:18:25.825 | 70.00th=[ 6915], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:18:25.825 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:25.825 | 99.99th=[42206] 00:18:25.825 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:18:25.825 slat (nsec): min=6512, max=42623, avg=14541.14, stdev=5359.88 00:18:25.825 clat (usec): min=298, max=661, avg=405.80, stdev=64.29 00:18:25.825 lat (usec): min=307, max=677, avg=420.34, stdev=65.94 00:18:25.825 clat percentiles (usec): 00:18:25.825 | 1.00th=[ 306], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 343], 00:18:25.825 | 30.00th=[ 363], 40.00th=[ 388], 50.00th=[ 400], 60.00th=[ 416], 00:18:25.825 | 70.00th=[ 441], 80.00th=[ 465], 90.00th=[ 498], 95.00th=[ 519], 00:18:25.825 | 99.00th=[ 562], 99.50th=[ 578], 99.90th=[ 660], 99.95th=[ 660], 00:18:25.825 | 99.99th=[ 660] 00:18:25.826 bw ( KiB/s): min= 4096, max= 4096, per=34.63%, avg=4096.00, stdev= 0.00, samples=1 00:18:25.826 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:25.826 lat (usec) : 500=80.76%, 750=13.69%, 1000=1.56% 00:18:25.826 lat (msec) : 2=0.52%, 10=0.17%, 50=3.29% 00:18:25.826 cpu : usr=0.58%, sys=0.58%, ctx=578, majf=0, minf=1 00:18:25.826 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:25.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.826 issued rwts: total=65,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.826 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:25.826 job3: (groupid=0, jobs=1): err= 0: pid=1389697: Sat Jul 20 18:48:35 2024 00:18:25.826 read: IOPS=550, BW=2202KiB/s (2255kB/s)(2204KiB/1001msec) 00:18:25.826 slat (nsec): min=7467, max=64680, avg=17424.62, stdev=8821.37 00:18:25.826 clat (usec): min=568, max=26801, avg=857.66, stdev=1108.52 00:18:25.826 lat (usec): min=583, max=26816, avg=875.08, stdev=1108.54 00:18:25.826 clat percentiles (usec): 00:18:25.826 | 1.00th=[ 619], 5.00th=[ 717], 10.00th=[ 775], 20.00th=[ 791], 00:18:25.826 | 30.00th=[ 799], 40.00th=[ 807], 50.00th=[ 816], 60.00th=[ 824], 00:18:25.826 | 70.00th=[ 824], 80.00th=[ 840], 90.00th=[ 857], 95.00th=[ 881], 00:18:25.826 | 99.00th=[ 955], 99.50th=[ 1045], 99.90th=[26870], 99.95th=[26870], 00:18:25.826 | 99.99th=[26870] 00:18:25.826 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:18:25.826 slat (nsec): min=8725, max=79628, avg=28361.59, stdev=12668.34 00:18:25.826 clat (usec): min=391, max=598, avg=467.64, stdev=29.70 00:18:25.826 lat (usec): min=407, max=614, avg=496.00, stdev=34.96 00:18:25.826 clat percentiles (usec): 
00:18:25.826 | 1.00th=[ 400], 5.00th=[ 424], 10.00th=[ 433], 20.00th=[ 441], 00:18:25.826 | 30.00th=[ 453], 40.00th=[ 461], 50.00th=[ 465], 60.00th=[ 474], 00:18:25.826 | 70.00th=[ 486], 80.00th=[ 494], 90.00th=[ 498], 95.00th=[ 515], 00:18:25.826 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[ 586], 99.95th=[ 603], 00:18:25.826 | 99.99th=[ 603] 00:18:25.826 bw ( KiB/s): min= 4096, max= 4096, per=34.63%, avg=4096.00, stdev= 0.00, samples=1 00:18:25.826 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:25.826 lat (usec) : 500=59.11%, 750=7.94%, 1000=32.76% 00:18:25.826 lat (msec) : 2=0.13%, 50=0.06% 00:18:25.826 cpu : usr=3.10%, sys=2.80%, ctx=1576, majf=0, minf=2 00:18:25.826 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:25.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.826 issued rwts: total=551,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.826 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:25.826 00:18:25.826 Run status group 0 (all jobs): 00:18:25.826 READ: bw=6375KiB/s (6528kB/s), 75.5KiB/s-4092KiB/s (77.3kB/s-4190kB/s), io=6636KiB (6795kB), run=1001-1041msec 00:18:25.826 WRITE: bw=11.5MiB/s (12.1MB/s), 1967KiB/s-4116KiB/s (2015kB/s-4215kB/s), io=12.0MiB (12.6MB), run=1001-1041msec 00:18:25.826 00:18:25.826 Disk stats (read/write): 00:18:25.826 nvme0n1: ios=799/1024, merge=0/0, ticks=1375/407, in_queue=1782, util=97.90% 00:18:25.826 nvme0n2: ios=49/512, merge=0/0, ticks=705/196, in_queue=901, util=92.38% 00:18:25.826 nvme0n3: ios=60/512, merge=0/0, ticks=618/208, in_queue=826, util=89.04% 00:18:25.826 nvme0n4: ios=569/807, merge=0/0, ticks=816/370, in_queue=1186, util=98.21% 00:18:25.826 18:48:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:25.826 [global] 00:18:25.826 thread=1 00:18:25.826 invalidate=1 00:18:25.826 rw=write 00:18:25.826 time_based=1 00:18:25.826 runtime=1 00:18:25.826 ioengine=libaio 00:18:25.826 direct=1 00:18:25.826 bs=4096 00:18:25.826 iodepth=128 00:18:25.826 norandommap=0 00:18:25.826 numjobs=1 00:18:25.826 00:18:25.826 verify_dump=1 00:18:25.826 verify_backlog=512 00:18:25.826 verify_state_save=0 00:18:25.826 do_verify=1 00:18:25.826 verify=crc32c-intel 00:18:25.826 [job0] 00:18:25.826 filename=/dev/nvme0n1 00:18:25.826 [job1] 00:18:25.826 filename=/dev/nvme0n2 00:18:25.826 [job2] 00:18:25.826 filename=/dev/nvme0n3 00:18:25.826 [job3] 00:18:25.826 filename=/dev/nvme0n4 00:18:25.826 Could not set queue depth (nvme0n1) 00:18:25.826 Could not set queue depth (nvme0n2) 00:18:25.826 Could not set queue depth (nvme0n3) 00:18:25.826 Could not set queue depth (nvme0n4) 00:18:25.826 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:25.826 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:25.826 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:25.826 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:25.826 fio-3.35 00:18:25.826 Starting 4 threads 00:18:27.194 00:18:27.194 job0: (groupid=0, jobs=1): err= 0: pid=1389925: Sat Jul 20 18:48:37 2024 00:18:27.194 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 
00:18:27.194 slat (usec): min=2, max=21146, avg=115.38, stdev=707.12 00:18:27.194 clat (usec): min=5352, max=66844, avg=14355.24, stdev=6728.14 00:18:27.194 lat (usec): min=5356, max=66849, avg=14470.62, stdev=6773.61 00:18:27.194 clat percentiles (usec): 00:18:27.194 | 1.00th=[ 6652], 5.00th=[ 8717], 10.00th=[ 9765], 20.00th=[11994], 00:18:27.194 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13304], 60.00th=[13435], 00:18:27.194 | 70.00th=[13566], 80.00th=[14353], 90.00th=[17695], 95.00th=[22938], 00:18:27.194 | 99.00th=[45876], 99.50th=[49021], 99.90th=[66847], 99.95th=[66847], 00:18:27.194 | 99.99th=[66847] 00:18:27.194 write: IOPS=3500, BW=13.7MiB/s (14.3MB/s)(13.7MiB/1002msec); 0 zone resets 00:18:27.194 slat (usec): min=3, max=12925, avg=178.72, stdev=870.50 00:18:27.194 clat (usec): min=624, max=97195, avg=23524.22, stdev=12752.51 00:18:27.194 lat (usec): min=5946, max=97208, avg=23702.94, stdev=12834.23 00:18:27.194 clat percentiles (usec): 00:18:27.194 | 1.00th=[ 6259], 5.00th=[11863], 10.00th=[14746], 20.00th=[16712], 00:18:27.194 | 30.00th=[17433], 40.00th=[18220], 50.00th=[19268], 60.00th=[20317], 00:18:27.194 | 70.00th=[25560], 80.00th=[29754], 90.00th=[33162], 95.00th=[47449], 00:18:27.194 | 99.00th=[85459], 99.50th=[85459], 99.90th=[96994], 99.95th=[96994], 00:18:27.194 | 99.99th=[96994] 00:18:27.194 bw ( KiB/s): min=15120, max=15120, per=33.13%, avg=15120.00, stdev= 0.00, samples=1 00:18:27.194 iops : min= 3780, max= 3780, avg=3780.00, stdev= 0.00, samples=1 00:18:27.194 lat (usec) : 750=0.02% 00:18:27.194 lat (msec) : 10=6.26%, 20=67.78%, 50=23.21%, 100=2.74% 00:18:27.194 cpu : usr=2.70%, sys=3.00%, ctx=513, majf=0, minf=11 00:18:27.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:18:27.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:27.194 issued rwts: total=3072,3508,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:27.194 job1: (groupid=0, jobs=1): err= 0: pid=1389932: Sat Jul 20 18:48:37 2024 00:18:27.194 read: IOPS=2398, BW=9592KiB/s (9823kB/s)(9.79MiB/1045msec) 00:18:27.194 slat (usec): min=2, max=11489, avg=166.41, stdev=979.48 00:18:27.194 clat (usec): min=10106, max=54748, avg=23138.89, stdev=11142.05 00:18:27.194 lat (usec): min=10138, max=56411, avg=23305.31, stdev=11199.07 00:18:27.194 clat percentiles (usec): 00:18:27.194 | 1.00th=[10552], 5.00th=[12387], 10.00th=[13566], 20.00th=[14484], 00:18:27.194 | 30.00th=[15664], 40.00th=[17171], 50.00th=[18744], 60.00th=[20579], 00:18:27.194 | 70.00th=[25035], 80.00th=[33424], 90.00th=[42206], 95.00th=[45876], 00:18:27.194 | 99.00th=[54789], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:18:27.194 | 99.99th=[54789] 00:18:27.194 write: IOPS=2449, BW=9799KiB/s (10.0MB/s)(10.0MiB/1045msec); 0 zone resets 00:18:27.194 slat (usec): min=3, max=12662, avg=223.56, stdev=1091.50 00:18:27.194 clat (usec): min=6494, max=86329, avg=28946.55, stdev=13692.52 00:18:27.194 lat (usec): min=6499, max=86335, avg=29170.10, stdev=13758.52 00:18:27.194 clat percentiles (usec): 00:18:27.194 | 1.00th=[ 6783], 5.00th=[11207], 10.00th=[14091], 20.00th=[19792], 00:18:27.194 | 30.00th=[22676], 40.00th=[24511], 50.00th=[26084], 60.00th=[27657], 00:18:27.194 | 70.00th=[31327], 80.00th=[37487], 90.00th=[44827], 95.00th=[53740], 00:18:27.194 | 99.00th=[82314], 99.50th=[82314], 99.90th=[86508], 99.95th=[86508], 00:18:27.194 | 
99.99th=[86508] 00:18:27.194 bw ( KiB/s): min= 8432, max=12048, per=22.44%, avg=10240.00, stdev=2556.90, samples=2 00:18:27.194 iops : min= 2108, max= 3012, avg=2560.00, stdev=639.22, samples=2 00:18:27.194 lat (msec) : 10=1.46%, 20=35.37%, 50=57.72%, 100=5.45% 00:18:27.194 cpu : usr=1.72%, sys=2.11%, ctx=340, majf=0, minf=11 00:18:27.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:27.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:27.194 issued rwts: total=2506,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:27.194 job2: (groupid=0, jobs=1): err= 0: pid=1389955: Sat Jul 20 18:48:37 2024 00:18:27.194 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec) 00:18:27.194 slat (usec): min=2, max=30956, avg=179.81, stdev=1161.80 00:18:27.194 clat (usec): min=10371, max=69498, avg=22245.49, stdev=8907.61 00:18:27.194 lat (usec): min=10382, max=69514, avg=22425.30, stdev=8987.23 00:18:27.194 clat percentiles (usec): 00:18:27.194 | 1.00th=[10945], 5.00th=[12387], 10.00th=[13173], 20.00th=[14746], 00:18:27.194 | 30.00th=[17957], 40.00th=[19792], 50.00th=[20841], 60.00th=[22152], 00:18:27.194 | 70.00th=[24773], 80.00th=[26608], 90.00th=[30540], 95.00th=[38011], 00:18:27.194 | 99.00th=[63701], 99.50th=[63701], 99.90th=[63701], 99.95th=[63701], 00:18:27.194 | 99.99th=[69731] 00:18:27.194 write: IOPS=3254, BW=12.7MiB/s (13.3MB/s)(12.9MiB/1012msec); 0 zone resets 00:18:27.194 slat (usec): min=3, max=8841, avg=130.03, stdev=577.83 00:18:27.194 clat (usec): min=1616, max=40595, avg=18247.65, stdev=6561.88 00:18:27.194 lat (usec): min=1624, max=40600, avg=18377.67, stdev=6592.15 00:18:27.194 clat percentiles (usec): 00:18:27.194 | 1.00th=[ 3818], 5.00th=[11076], 10.00th=[12387], 20.00th=[14091], 00:18:27.194 | 30.00th=[15008], 40.00th=[15664], 50.00th=[16450], 60.00th=[17433], 00:18:27.194 | 70.00th=[19006], 80.00th=[21103], 90.00th=[29492], 95.00th=[32900], 00:18:27.194 | 99.00th=[36963], 99.50th=[38011], 99.90th=[40633], 99.95th=[40633], 00:18:27.194 | 99.99th=[40633] 00:18:27.194 bw ( KiB/s): min= 9632, max=15704, per=27.76%, avg=12668.00, stdev=4293.55, samples=2 00:18:27.194 iops : min= 2408, max= 3926, avg=3167.00, stdev=1073.39, samples=2 00:18:27.194 lat (msec) : 2=0.11%, 4=0.52%, 10=1.04%, 20=56.83%, 50=40.17% 00:18:27.194 lat (msec) : 100=1.34% 00:18:27.194 cpu : usr=1.68%, sys=3.26%, ctx=447, majf=0, minf=15 00:18:27.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:27.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:27.195 issued rwts: total=3072,3294,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.195 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:27.195 job3: (groupid=0, jobs=1): err= 0: pid=1389967: Sat Jul 20 18:48:37 2024 00:18:27.195 read: IOPS=2346, BW=9385KiB/s (9610kB/s)(9432KiB/1005msec) 00:18:27.195 slat (usec): min=3, max=10105, avg=129.13, stdev=645.50 00:18:27.195 clat (usec): min=1732, max=37023, avg=13909.86, stdev=4487.48 00:18:27.195 lat (usec): min=5531, max=37032, avg=14038.99, stdev=4556.75 00:18:27.195 clat percentiles (usec): 00:18:27.195 | 1.00th=[ 5800], 5.00th=[ 9372], 10.00th=[10552], 20.00th=[11207], 00:18:27.195 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 
60.00th=[12911], 00:18:27.195 | 70.00th=[13960], 80.00th=[15533], 90.00th=[19792], 95.00th=[23987], 00:18:27.195 | 99.00th=[30540], 99.50th=[33817], 99.90th=[34866], 99.95th=[36439], 00:18:27.195 | 99.99th=[36963] 00:18:27.195 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:18:27.195 slat (usec): min=4, max=16672, avg=264.93, stdev=813.74 00:18:27.195 clat (usec): min=8029, max=54743, avg=36345.28, stdev=8871.43 00:18:27.195 lat (usec): min=8035, max=54751, avg=36610.21, stdev=8947.78 00:18:27.195 clat percentiles (usec): 00:18:27.195 | 1.00th=[12911], 5.00th=[16909], 10.00th=[21365], 20.00th=[28967], 00:18:27.195 | 30.00th=[33424], 40.00th=[37487], 50.00th=[39060], 60.00th=[41157], 00:18:27.195 | 70.00th=[42206], 80.00th=[43779], 90.00th=[44827], 95.00th=[44827], 00:18:27.195 | 99.00th=[47449], 99.50th=[50070], 99.90th=[52691], 99.95th=[54789], 00:18:27.195 | 99.99th=[54789] 00:18:27.195 bw ( KiB/s): min=10176, max=10304, per=22.44%, avg=10240.00, stdev=90.51, samples=2 00:18:27.195 iops : min= 2544, max= 2576, avg=2560.00, stdev=22.63, samples=2 00:18:27.195 lat (msec) : 2=0.02%, 10=3.09%, 20=44.77%, 50=51.81%, 100=0.31% 00:18:27.195 cpu : usr=2.59%, sys=4.38%, ctx=474, majf=0, minf=13 00:18:27.195 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:27.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:27.195 issued rwts: total=2358,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.195 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:27.195 00:18:27.195 Run status group 0 (all jobs): 00:18:27.195 READ: bw=41.1MiB/s (43.1MB/s), 9385KiB/s-12.0MiB/s (9610kB/s-12.6MB/s), io=43.0MiB (45.1MB), run=1002-1045msec 00:18:27.195 WRITE: bw=44.6MiB/s (46.7MB/s), 9799KiB/s-13.7MiB/s (10.0MB/s-14.3MB/s), io=46.6MiB (48.8MB), run=1002-1045msec 00:18:27.195 00:18:27.195 Disk stats (read/write): 00:18:27.195 nvme0n1: ios=2600/2970, merge=0/0, ticks=12875/26136, in_queue=39011, util=97.90% 00:18:27.195 nvme0n2: ios=1993/2048, merge=0/0, ticks=23058/29589, in_queue=52647, util=91.56% 00:18:27.195 nvme0n3: ios=2560/2983, merge=0/0, ticks=18043/18064, in_queue=36107, util=88.69% 00:18:27.195 nvme0n4: ios=2084/2181, merge=0/0, ticks=15169/35198, in_queue=50367, util=98.94% 00:18:27.195 18:48:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:27.195 [global] 00:18:27.195 thread=1 00:18:27.195 invalidate=1 00:18:27.195 rw=randwrite 00:18:27.195 time_based=1 00:18:27.195 runtime=1 00:18:27.195 ioengine=libaio 00:18:27.195 direct=1 00:18:27.195 bs=4096 00:18:27.195 iodepth=128 00:18:27.195 norandommap=0 00:18:27.195 numjobs=1 00:18:27.195 00:18:27.195 verify_dump=1 00:18:27.195 verify_backlog=512 00:18:27.195 verify_state_save=0 00:18:27.195 do_verify=1 00:18:27.195 verify=crc32c-intel 00:18:27.195 [job0] 00:18:27.195 filename=/dev/nvme0n1 00:18:27.195 [job1] 00:18:27.195 filename=/dev/nvme0n2 00:18:27.195 [job2] 00:18:27.195 filename=/dev/nvme0n3 00:18:27.195 [job3] 00:18:27.195 filename=/dev/nvme0n4 00:18:27.195 Could not set queue depth (nvme0n1) 00:18:27.195 Could not set queue depth (nvme0n2) 00:18:27.195 Could not set queue depth (nvme0n3) 00:18:27.195 Could not set queue depth (nvme0n4) 00:18:27.451 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:18:27.451 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:27.451 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:27.451 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:27.451 fio-3.35 00:18:27.451 Starting 4 threads 00:18:28.845 00:18:28.845 job0: (groupid=0, jobs=1): err= 0: pid=1390279: Sat Jul 20 18:48:38 2024 00:18:28.845 read: IOPS=2503, BW=9.78MiB/s (10.3MB/s)(10.1MiB/1032msec) 00:18:28.845 slat (usec): min=2, max=17423, avg=134.35, stdev=828.05 00:18:28.845 clat (usec): min=3206, max=71776, avg=17086.81, stdev=9252.83 00:18:28.845 lat (usec): min=3214, max=71780, avg=17221.16, stdev=9283.76 00:18:28.845 clat percentiles (usec): 00:18:28.845 | 1.00th=[ 5080], 5.00th=[ 7373], 10.00th=[ 9765], 20.00th=[10683], 00:18:28.845 | 30.00th=[11731], 40.00th=[13042], 50.00th=[14615], 60.00th=[15795], 00:18:28.845 | 70.00th=[18744], 80.00th=[21103], 90.00th=[29492], 95.00th=[39584], 00:18:28.845 | 99.00th=[53740], 99.50th=[57410], 99.90th=[61604], 99.95th=[61604], 00:18:28.845 | 99.99th=[71828] 00:18:28.845 write: IOPS=2976, BW=11.6MiB/s (12.2MB/s)(12.0MiB/1032msec); 0 zone resets 00:18:28.845 slat (usec): min=3, max=10404, avg=207.05, stdev=829.17 00:18:28.845 clat (usec): min=1906, max=69408, avg=28253.91, stdev=12923.08 00:18:28.845 lat (usec): min=1919, max=71688, avg=28460.96, stdev=12993.28 00:18:28.845 clat percentiles (usec): 00:18:28.845 | 1.00th=[ 3392], 5.00th=[ 9896], 10.00th=[13566], 20.00th=[17695], 00:18:28.845 | 30.00th=[20579], 40.00th=[23462], 50.00th=[26870], 60.00th=[30540], 00:18:28.845 | 70.00th=[33424], 80.00th=[37487], 90.00th=[45876], 95.00th=[51119], 00:18:28.845 | 99.00th=[64750], 99.50th=[67634], 99.90th=[69731], 99.95th=[69731], 00:18:28.845 | 99.99th=[69731] 00:18:28.845 bw ( KiB/s): min=10856, max=12888, per=30.08%, avg=11872.00, stdev=1436.84, samples=2 00:18:28.845 iops : min= 2714, max= 3222, avg=2968.00, stdev=359.21, samples=2 00:18:28.845 lat (msec) : 2=0.21%, 4=0.57%, 10=8.47%, 20=41.23%, 50=45.72% 00:18:28.845 lat (msec) : 100=3.80% 00:18:28.845 cpu : usr=1.75%, sys=3.20%, ctx=587, majf=0, minf=17 00:18:28.845 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:28.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:28.845 issued rwts: total=2584,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.845 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:28.845 job1: (groupid=0, jobs=1): err= 0: pid=1390280: Sat Jul 20 18:48:38 2024 00:18:28.845 read: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec) 00:18:28.845 slat (usec): min=2, max=174714, avg=317.51, stdev=5090.08 00:18:28.845 clat (msec): min=8, max=207, avg=34.84, stdev=46.04 00:18:28.845 lat (msec): min=8, max=207, avg=35.16, stdev=46.32 00:18:28.845 clat percentiles (msec): 00:18:28.845 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:18:28.845 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 18], 00:18:28.845 | 70.00th=[ 22], 80.00th=[ 37], 90.00th=[ 101], 95.00th=[ 165], 00:18:28.845 | 99.00th=[ 188], 99.50th=[ 188], 99.90th=[ 209], 99.95th=[ 209], 00:18:28.845 | 99.99th=[ 209] 00:18:28.845 write: IOPS=1983, BW=7932KiB/s (8123kB/s)(7964KiB/1004msec); 0 zone resets 00:18:28.845 slat (usec): min=3, max=138958, 
avg=238.72, stdev=3608.49 00:18:28.845 clat (usec): min=1797, max=207844, avg=37079.00, stdev=46248.49 00:18:28.845 lat (msec): min=6, max=207, avg=37.32, stdev=46.35 00:18:28.845 clat percentiles (msec): 00:18:28.845 | 1.00th=[ 8], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 13], 00:18:28.845 | 30.00th=[ 14], 40.00th=[ 17], 50.00th=[ 21], 60.00th=[ 24], 00:18:28.845 | 70.00th=[ 27], 80.00th=[ 35], 90.00th=[ 84], 95.00th=[ 157], 00:18:28.845 | 99.00th=[ 205], 99.50th=[ 205], 99.90th=[ 205], 99.95th=[ 209], 00:18:28.845 | 99.99th=[ 209] 00:18:28.845 bw ( KiB/s): min= 5928, max= 8976, per=18.88%, avg=7452.00, stdev=2155.26, samples=2 00:18:28.845 iops : min= 1482, max= 2244, avg=1863.00, stdev=538.82, samples=2 00:18:28.846 lat (msec) : 2=0.03%, 10=5.30%, 20=50.92%, 50=27.45%, 100=6.63% 00:18:28.846 lat (msec) : 250=9.67% 00:18:28.846 cpu : usr=1.20%, sys=1.79%, ctx=214, majf=0, minf=11 00:18:28.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:18:28.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:28.846 issued rwts: total=1536,1991,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.846 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:28.846 job2: (groupid=0, jobs=1): err= 0: pid=1390281: Sat Jul 20 18:48:38 2024 00:18:28.846 read: IOPS=2351, BW=9408KiB/s (9633kB/s)(9464KiB/1006msec) 00:18:28.846 slat (usec): min=3, max=40804, avg=182.37, stdev=1321.71 00:18:28.846 clat (usec): min=3919, max=71093, avg=19943.26, stdev=13993.60 00:18:28.846 lat (usec): min=7147, max=71100, avg=20125.63, stdev=14081.41 00:18:28.846 clat percentiles (usec): 00:18:28.846 | 1.00th=[ 7242], 5.00th=[ 7570], 10.00th=[ 8979], 20.00th=[10814], 00:18:28.846 | 30.00th=[12125], 40.00th=[13304], 50.00th=[14746], 60.00th=[16712], 00:18:28.846 | 70.00th=[20055], 80.00th=[25035], 90.00th=[46400], 95.00th=[53740], 00:18:28.846 | 99.00th=[64226], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:18:28.846 | 99.99th=[70779] 00:18:28.846 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:18:28.846 slat (usec): min=4, max=17524, avg=214.46, stdev=1133.83 00:18:28.846 clat (usec): min=1169, max=72240, avg=31426.33, stdev=17128.46 00:18:28.846 lat (usec): min=1189, max=72255, avg=31640.79, stdev=17219.66 00:18:28.846 clat percentiles (usec): 00:18:28.846 | 1.00th=[ 4686], 5.00th=[ 8160], 10.00th=[10159], 20.00th=[13173], 00:18:28.846 | 30.00th=[18482], 40.00th=[26608], 50.00th=[31065], 60.00th=[33817], 00:18:28.846 | 70.00th=[40109], 80.00th=[46924], 90.00th=[55313], 95.00th=[61604], 00:18:28.846 | 99.00th=[68682], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:18:28.846 | 99.99th=[71828] 00:18:28.846 bw ( KiB/s): min= 8848, max=11632, per=25.94%, avg=10240.00, stdev=1968.59, samples=2 00:18:28.846 iops : min= 2212, max= 2908, avg=2560.00, stdev=492.15, samples=2 00:18:28.846 lat (msec) : 2=0.04%, 4=0.28%, 10=13.20%, 20=37.11%, 50=35.65% 00:18:28.846 lat (msec) : 100=13.72% 00:18:28.846 cpu : usr=2.29%, sys=4.78%, ctx=307, majf=0, minf=7 00:18:28.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:18:28.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:28.846 issued rwts: total=2366,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.846 latency : target=0, window=0, percentile=100.00%, depth=128 
00:18:28.846 job3: (groupid=0, jobs=1): err= 0: pid=1390282: Sat Jul 20 18:48:38 2024 00:18:28.846 read: IOPS=2275, BW=9103KiB/s (9321kB/s)(9212KiB/1012msec) 00:18:28.846 slat (usec): min=2, max=28993, avg=191.20, stdev=1268.14 00:18:28.846 clat (usec): min=2373, max=58383, avg=23015.84, stdev=11537.73 00:18:28.846 lat (usec): min=2748, max=58397, avg=23207.03, stdev=11588.10 00:18:28.846 clat percentiles (usec): 00:18:28.846 | 1.00th=[ 5800], 5.00th=[10028], 10.00th=[11994], 20.00th=[13435], 00:18:28.846 | 30.00th=[14746], 40.00th=[16319], 50.00th=[17957], 60.00th=[21627], 00:18:28.846 | 70.00th=[27657], 80.00th=[36963], 90.00th=[42206], 95.00th=[43779], 00:18:28.846 | 99.00th=[44827], 99.50th=[46400], 99.90th=[50594], 99.95th=[52167], 00:18:28.846 | 99.99th=[58459] 00:18:28.846 write: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec); 0 zone resets 00:18:28.846 slat (usec): min=3, max=13571, avg=196.37, stdev=886.04 00:18:28.846 clat (usec): min=3810, max=83851, avg=29118.33, stdev=13866.74 00:18:28.846 lat (usec): min=3816, max=83856, avg=29314.70, stdev=13948.63 00:18:28.846 clat percentiles (usec): 00:18:28.846 | 1.00th=[ 4752], 5.00th=[11994], 10.00th=[13435], 20.00th=[17957], 00:18:28.846 | 30.00th=[21103], 40.00th=[23462], 50.00th=[26870], 60.00th=[31589], 00:18:28.846 | 70.00th=[34866], 80.00th=[38011], 90.00th=[43254], 95.00th=[53216], 00:18:28.846 | 99.00th=[78119], 99.50th=[80217], 99.90th=[82314], 99.95th=[83362], 00:18:28.846 | 99.99th=[83362] 00:18:28.846 bw ( KiB/s): min= 9016, max=11464, per=25.94%, avg=10240.00, stdev=1731.00, samples=2 00:18:28.846 iops : min= 2254, max= 2866, avg=2560.00, stdev=432.75, samples=2 00:18:28.846 lat (msec) : 4=0.80%, 10=1.91%, 20=37.88%, 50=55.97%, 100=3.43% 00:18:28.846 cpu : usr=1.29%, sys=2.97%, ctx=432, majf=0, minf=15 00:18:28.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:28.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:28.846 issued rwts: total=2303,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.846 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:28.846 00:18:28.846 Run status group 0 (all jobs): 00:18:28.846 READ: bw=33.3MiB/s (34.9MB/s), 6120KiB/s-9.78MiB/s (6266kB/s-10.3MB/s), io=34.3MiB (36.0MB), run=1004-1032msec 00:18:28.846 WRITE: bw=38.5MiB/s (40.4MB/s), 7932KiB/s-11.6MiB/s (8123kB/s-12.2MB/s), io=39.8MiB (41.7MB), run=1004-1032msec 00:18:28.846 00:18:28.846 Disk stats (read/write): 00:18:28.846 nvme0n1: ios=2120/2560, merge=0/0, ticks=14831/28010, in_queue=42841, util=99.30% 00:18:28.846 nvme0n2: ios=1073/1474, merge=0/0, ticks=27346/47284, in_queue=74630, util=91.07% 00:18:28.846 nvme0n3: ios=1701/2048, merge=0/0, ticks=35738/69482, in_queue=105220, util=94.06% 00:18:28.846 nvme0n4: ios=1868/2048, merge=0/0, ticks=21757/28087, in_queue=49844, util=97.90% 00:18:28.846 18:48:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:28.846 18:48:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1390420 00:18:28.846 18:48:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:28.846 18:48:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:28.846 [global] 00:18:28.846 thread=1 00:18:28.846 invalidate=1 00:18:28.846 rw=read 00:18:28.846 time_based=1 00:18:28.846 runtime=10 00:18:28.846 ioengine=libaio 
00:18:28.846 direct=1 00:18:28.846 bs=4096 00:18:28.846 iodepth=1 00:18:28.846 norandommap=1 00:18:28.846 numjobs=1 00:18:28.846 00:18:28.846 [job0] 00:18:28.846 filename=/dev/nvme0n1 00:18:28.846 [job1] 00:18:28.846 filename=/dev/nvme0n2 00:18:28.846 [job2] 00:18:28.846 filename=/dev/nvme0n3 00:18:28.846 [job3] 00:18:28.846 filename=/dev/nvme0n4 00:18:28.846 Could not set queue depth (nvme0n1) 00:18:28.846 Could not set queue depth (nvme0n2) 00:18:28.846 Could not set queue depth (nvme0n3) 00:18:28.846 Could not set queue depth (nvme0n4) 00:18:28.846 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:28.846 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:28.846 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:28.846 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:28.846 fio-3.35 00:18:28.846 Starting 4 threads 00:18:32.184 18:48:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:32.184 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=5939200, buflen=4096 00:18:32.184 fio: pid=1390511, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:32.184 18:48:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:32.184 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=11784192, buflen=4096 00:18:32.184 fio: pid=1390510, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:32.184 18:48:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:32.184 18:48:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:32.442 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=18440192, buflen=4096 00:18:32.442 fio: pid=1390508, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:32.442 18:48:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:32.442 18:48:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:32.702 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=11264000, buflen=4096 00:18:32.702 fio: pid=1390509, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:32.702 18:48:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:32.702 18:48:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:32.702 00:18:32.702 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1390508: Sat Jul 20 18:48:42 2024 00:18:32.702 read: IOPS=1340, BW=5363KiB/s (5491kB/s)(17.6MiB/3358msec) 00:18:32.702 slat (usec): min=5, max=15809, avg=27.40, stdev=359.42 00:18:32.702 clat (usec): min=422, max=42555, avg=714.03, stdev=1845.31 00:18:32.702 lat (usec): min=435, max=42578, avg=741.44, stdev=1879.95 
00:18:32.702 clat percentiles (usec): 00:18:32.702 | 1.00th=[ 461], 5.00th=[ 486], 10.00th=[ 519], 20.00th=[ 603], 00:18:32.702 | 30.00th=[ 611], 40.00th=[ 619], 50.00th=[ 627], 60.00th=[ 644], 00:18:32.702 | 70.00th=[ 660], 80.00th=[ 685], 90.00th=[ 717], 95.00th=[ 750], 00:18:32.702 | 99.00th=[ 857], 99.50th=[ 996], 99.90th=[42206], 99.95th=[42206], 00:18:32.702 | 99.99th=[42730] 00:18:32.702 bw ( KiB/s): min= 2472, max= 6248, per=41.46%, avg=5292.00, stdev=1419.95, samples=6 00:18:32.702 iops : min= 618, max= 1562, avg=1323.00, stdev=354.99, samples=6 00:18:32.702 lat (usec) : 500=7.28%, 750=87.63%, 1000=4.60% 00:18:32.702 lat (msec) : 2=0.24%, 4=0.02%, 50=0.20% 00:18:32.702 cpu : usr=1.61%, sys=2.83%, ctx=4508, majf=0, minf=1 00:18:32.702 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:32.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.702 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.702 issued rwts: total=4503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.702 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:32.702 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1390509: Sat Jul 20 18:48:42 2024 00:18:32.702 read: IOPS=758, BW=3031KiB/s (3104kB/s)(10.7MiB/3629msec) 00:18:32.702 slat (usec): min=8, max=11836, avg=29.81, stdev=315.53 00:18:32.702 clat (usec): min=424, max=42738, avg=1284.13, stdev=3791.14 00:18:32.702 lat (usec): min=436, max=52984, avg=1313.94, stdev=3891.93 00:18:32.702 clat percentiles (usec): 00:18:32.702 | 1.00th=[ 478], 5.00th=[ 594], 10.00th=[ 644], 20.00th=[ 668], 00:18:32.702 | 30.00th=[ 701], 40.00th=[ 840], 50.00th=[ 898], 60.00th=[ 1012], 00:18:32.702 | 70.00th=[ 1139], 80.00th=[ 1221], 90.00th=[ 1254], 95.00th=[ 1319], 00:18:32.702 | 99.00th=[ 1631], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:18:32.702 | 99.99th=[42730] 00:18:32.702 bw ( KiB/s): min= 92, max= 4920, per=24.59%, avg=3138.86, stdev=1943.22, samples=7 00:18:32.702 iops : min= 23, max= 1230, avg=784.71, stdev=485.81, samples=7 00:18:32.702 lat (usec) : 500=2.36%, 750=33.26%, 1000=23.45% 00:18:32.702 lat (msec) : 2=39.99%, 4=0.04%, 50=0.87% 00:18:32.702 cpu : usr=0.72%, sys=1.85%, ctx=2755, majf=0, minf=1 00:18:32.702 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:32.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.702 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.702 issued rwts: total=2751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.702 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:32.702 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1390510: Sat Jul 20 18:48:42 2024 00:18:32.702 read: IOPS=928, BW=3711KiB/s (3800kB/s)(11.2MiB/3101msec) 00:18:32.702 slat (nsec): min=5778, max=74820, avg=19705.26, stdev=9890.43 00:18:32.702 clat (usec): min=429, max=41994, avg=1053.02, stdev=3057.12 00:18:32.702 lat (usec): min=440, max=42013, avg=1072.73, stdev=3057.14 00:18:32.702 clat percentiles (usec): 00:18:32.702 | 1.00th=[ 553], 5.00th=[ 619], 10.00th=[ 627], 20.00th=[ 652], 00:18:32.702 | 30.00th=[ 660], 40.00th=[ 676], 50.00th=[ 685], 60.00th=[ 742], 00:18:32.702 | 70.00th=[ 857], 80.00th=[ 1090], 90.00th=[ 1237], 95.00th=[ 1254], 00:18:32.702 | 99.00th=[ 1483], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:18:32.702 | 99.99th=[42206] 
00:18:32.702 bw ( KiB/s): min= 104, max= 5688, per=28.91%, avg=3690.67, stdev=2015.77, samples=6 00:18:32.702 iops : min= 26, max= 1422, avg=922.67, stdev=503.94, samples=6 00:18:32.702 lat (usec) : 500=0.73%, 750=60.49%, 1000=12.47% 00:18:32.702 lat (msec) : 2=25.64%, 4=0.03%, 50=0.59% 00:18:32.702 cpu : usr=1.19%, sys=2.61%, ctx=2878, majf=0, minf=1 00:18:32.702 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:32.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.702 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.702 issued rwts: total=2878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.702 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:32.702 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1390511: Sat Jul 20 18:48:42 2024 00:18:32.702 read: IOPS=510, BW=2042KiB/s (2091kB/s)(5800KiB/2840msec) 00:18:32.702 slat (nsec): min=5712, max=68933, avg=18495.12, stdev=7845.89 00:18:32.702 clat (usec): min=537, max=42233, avg=1935.01, stdev=7197.02 00:18:32.702 lat (usec): min=552, max=42266, avg=1953.51, stdev=7197.75 00:18:32.703 clat percentiles (usec): 00:18:32.703 | 1.00th=[ 553], 5.00th=[ 570], 10.00th=[ 578], 20.00th=[ 586], 00:18:32.703 | 30.00th=[ 603], 40.00th=[ 611], 50.00th=[ 611], 60.00th=[ 619], 00:18:32.703 | 70.00th=[ 627], 80.00th=[ 652], 90.00th=[ 685], 95.00th=[ 1020], 00:18:32.703 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:32.703 | 99.99th=[42206] 00:18:32.703 bw ( KiB/s): min= 160, max= 4256, per=11.48%, avg=1465.60, stdev=1669.31, samples=5 00:18:32.703 iops : min= 40, max= 1064, avg=366.40, stdev=417.33, samples=5 00:18:32.703 lat (usec) : 750=92.35%, 1000=1.72% 00:18:32.703 lat (msec) : 2=2.69%, 50=3.17% 00:18:32.703 cpu : usr=0.21%, sys=1.23%, ctx=1451, majf=0, minf=1 00:18:32.703 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:32.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.703 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.703 issued rwts: total=1451,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.703 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:32.703 00:18:32.703 Run status group 0 (all jobs): 00:18:32.703 READ: bw=12.5MiB/s (13.1MB/s), 2042KiB/s-5363KiB/s (2091kB/s-5491kB/s), io=45.2MiB (47.4MB), run=2840-3629msec 00:18:32.703 00:18:32.703 Disk stats (read/write): 00:18:32.703 nvme0n1: ios=4420/0, merge=0/0, ticks=3106/0, in_queue=3106, util=93.54% 00:18:32.703 nvme0n2: ios=2748/0, merge=0/0, ticks=3412/0, in_queue=3412, util=95.19% 00:18:32.703 nvme0n3: ios=2830/0, merge=0/0, ticks=2946/0, in_queue=2946, util=96.54% 00:18:32.703 nvme0n4: ios=1410/0, merge=0/0, ticks=2753/0, in_queue=2753, util=96.70% 00:18:32.960 18:48:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:32.960 18:48:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:33.218 18:48:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:33.218 18:48:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:33.476 18:48:43 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:33.476 18:48:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:33.734 18:48:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:33.734 18:48:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:33.991 18:48:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:33.991 18:48:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1390420 00:18:33.991 18:48:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:33.991 18:48:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:33.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:33.991 18:48:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:33.991 18:48:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:18:33.991 18:48:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:33.991 18:48:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:33.991 18:48:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:33.991 18:48:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:33.991 18:48:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:18:33.991 18:48:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:33.991 18:48:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:33.991 nvmf hotplug test: fio failed as expected 00:18:33.991 18:48:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:34.248 18:48:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:34.248 18:48:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:34.248 18:48:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:34.248 18:48:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:34.248 18:48:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:34.248 18:48:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:34.248 18:48:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:34.248 18:48:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:34.248 18:48:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:34.248 18:48:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:34.248 18:48:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:34.248 rmmod nvme_tcp 00:18:34.248 rmmod nvme_fabrics 00:18:34.505 rmmod nvme_keyring 00:18:34.505 18:48:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:34.505 18:48:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # 
set -e 00:18:34.505 18:48:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:34.505 18:48:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1388394 ']' 00:18:34.505 18:48:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1388394 00:18:34.505 18:48:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 1388394 ']' 00:18:34.505 18:48:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 1388394 00:18:34.505 18:48:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:18:34.505 18:48:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:34.505 18:48:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1388394 00:18:34.505 18:48:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:34.505 18:48:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:34.505 18:48:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1388394' 00:18:34.505 killing process with pid 1388394 00:18:34.505 18:48:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 1388394 00:18:34.505 18:48:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 1388394 00:18:34.763 18:48:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:34.763 18:48:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:34.763 18:48:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:34.763 18:48:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:34.763 18:48:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:34.763 18:48:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.763 18:48:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:34.763 18:48:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.659 18:48:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:36.659 00:18:36.659 real 0m23.439s 00:18:36.659 user 1m19.744s 00:18:36.659 sys 0m6.672s 00:18:36.659 18:48:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:36.659 18:48:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.659 ************************************ 00:18:36.659 END TEST nvmf_fio_target 00:18:36.659 ************************************ 00:18:36.659 18:48:46 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:36.659 18:48:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:36.659 18:48:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:36.659 18:48:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:36.659 ************************************ 00:18:36.659 START TEST nvmf_bdevio 00:18:36.659 ************************************ 00:18:36.659 18:48:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:36.659 * Looking for test storage... 
00:18:36.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:36.659 18:48:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:36.659 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:36.917 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:36.917 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:36.917 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:36.917 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:36.917 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:36.917 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:36.917 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:36.917 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:36.917 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:36.917 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:36.917 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:36.917 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:36.917 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:36.917 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:36.917 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:36.917 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:36.917 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:36.917 18:48:46 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:36.917 18:48:46 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:36.917 18:48:46 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.917 18:48:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.917 18:48:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.918 18:48:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.918 18:48:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:36.918 18:48:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.918 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:36.918 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:36.918 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:36.918 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:36.918 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:36.918 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:36.918 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:36.918 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:36.918 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:36.918 18:48:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:36.918 18:48:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:36.918 18:48:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:36.918 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:36.918 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:36.918 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:36.918 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:36.918 18:48:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:36.918 18:48:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.918 18:48:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:18:36.918 18:48:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.918 18:48:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:36.918 18:48:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:36.918 18:48:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:36.918 18:48:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:38.820 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:38.820 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:38.820 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:38.820 
Found net devices under 0000:0a:00.1: cvl_0_1 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:38.820 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:38.821 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:39.079 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:39.079 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:39.079 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:39.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:39.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:18:39.079 00:18:39.079 --- 10.0.0.2 ping statistics --- 00:18:39.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.079 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:18:39.079 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:39.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:39.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:18:39.079 00:18:39.079 --- 10.0.0.1 ping statistics --- 00:18:39.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.079 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:18:39.079 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:39.079 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:39.079 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:39.079 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:39.079 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:39.079 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:39.079 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:39.079 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:39.079 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:39.080 18:48:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:39.080 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:39.080 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:39.080 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:39.080 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1393132 00:18:39.080 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:39.080 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1393132 00:18:39.080 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 1393132 ']' 00:18:39.080 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.080 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:39.080 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.080 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:39.080 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:39.080 [2024-07-20 18:48:49.260226] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:39.080 [2024-07-20 18:48:49.260323] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.080 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.080 [2024-07-20 18:48:49.330548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:39.338 [2024-07-20 18:48:49.425569] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.338 [2024-07-20 18:48:49.425625] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
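Read together, the nvmf_tcp_init and nvmfappstart trace above amounts to the following setup, shown here as a condensed shell sketch rather than the harness functions themselves (the interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses and the 0x78 core mask are simply the values this run used):

# Target port goes into its own network namespace; the second port of the same NIC
# stays in the root namespace and acts as the initiator.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP listener port
ping -c 1 10.0.0.2                                                  # reachability check, both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp
# Start the target inside the namespace, as nvmfappstart does above.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &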
00:18:39.338 [2024-07-20 18:48:49.425651] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.338 [2024-07-20 18:48:49.425665] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.338 [2024-07-20 18:48:49.425677] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:39.338 [2024-07-20 18:48:49.425808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:39.338 [2024-07-20 18:48:49.425901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:39.338 [2024-07-20 18:48:49.425969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:39.338 [2024-07-20 18:48:49.425966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:39.338 [2024-07-20 18:48:49.571346] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:39.338 Malloc0 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
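The rpc_cmd calls traced above are thin wrappers around scripts/rpc.py talking to the target's /var/tmp/spdk.sock; issued by hand, the same bring-up for the bdevio run would look like this (values copied from the trace):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB backing bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420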
00:18:39.338 [2024-07-20 18:48:49.622493] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:39.338 { 00:18:39.338 "params": { 00:18:39.338 "name": "Nvme$subsystem", 00:18:39.338 "trtype": "$TEST_TRANSPORT", 00:18:39.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:39.338 "adrfam": "ipv4", 00:18:39.338 "trsvcid": "$NVMF_PORT", 00:18:39.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:39.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:39.338 "hdgst": ${hdgst:-false}, 00:18:39.338 "ddgst": ${ddgst:-false} 00:18:39.338 }, 00:18:39.338 "method": "bdev_nvme_attach_controller" 00:18:39.338 } 00:18:39.338 EOF 00:18:39.338 )") 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:39.338 18:48:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:39.338 "params": { 00:18:39.338 "name": "Nvme1", 00:18:39.338 "trtype": "tcp", 00:18:39.338 "traddr": "10.0.0.2", 00:18:39.338 "adrfam": "ipv4", 00:18:39.338 "trsvcid": "4420", 00:18:39.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.338 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:39.338 "hdgst": false, 00:18:39.338 "ddgst": false 00:18:39.338 }, 00:18:39.338 "method": "bdev_nvme_attach_controller" 00:18:39.338 }' 00:18:39.597 [2024-07-20 18:48:49.665119] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
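bdevio is pointed at the freshly created subsystem through --json /dev/fd/62, fed by gen_nvmf_target_json. Only the per-controller fragment is visible in the trace, so the outer wrapper in the sketch below is an assumption based on the usual SPDK --json layout; the parameter values are taken verbatim from the printf above:

test/bdev/bdevio/bdevio --json <(cat << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)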
00:18:39.597 [2024-07-20 18:48:49.665198] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1393166 ] 00:18:39.597 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.597 [2024-07-20 18:48:49.726943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:39.597 [2024-07-20 18:48:49.816747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.597 [2024-07-20 18:48:49.816815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.597 [2024-07-20 18:48:49.816820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.855 I/O targets: 00:18:39.855 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:39.855 00:18:39.855 00:18:39.855 CUnit - A unit testing framework for C - Version 2.1-3 00:18:39.855 http://cunit.sourceforge.net/ 00:18:39.855 00:18:39.855 00:18:39.855 Suite: bdevio tests on: Nvme1n1 00:18:39.855 Test: blockdev write read block ...passed 00:18:40.113 Test: blockdev write zeroes read block ...passed 00:18:40.113 Test: blockdev write zeroes read no split ...passed 00:18:40.113 Test: blockdev write zeroes read split ...passed 00:18:40.113 Test: blockdev write zeroes read split partial ...passed 00:18:40.113 Test: blockdev reset ...[2024-07-20 18:48:50.306662] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:40.113 [2024-07-20 18:48:50.306777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb3f80 (9): Bad file descriptor 00:18:40.113 [2024-07-20 18:48:50.365084] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:40.113 passed 00:18:40.113 Test: blockdev write read 8 blocks ...passed 00:18:40.113 Test: blockdev write read size > 128k ...passed 00:18:40.113 Test: blockdev write read invalid size ...passed 00:18:40.372 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:40.372 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:40.372 Test: blockdev write read max offset ...passed 00:18:40.372 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:40.372 Test: blockdev writev readv 8 blocks ...passed 00:18:40.372 Test: blockdev writev readv 30 x 1block ...passed 00:18:40.372 Test: blockdev writev readv block ...passed 00:18:40.372 Test: blockdev writev readv size > 128k ...passed 00:18:40.373 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:40.373 Test: blockdev comparev and writev ...[2024-07-20 18:48:50.634268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:40.373 [2024-07-20 18:48:50.634304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:40.373 [2024-07-20 18:48:50.634337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:40.373 [2024-07-20 18:48:50.634355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.373 [2024-07-20 18:48:50.634832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:40.373 [2024-07-20 18:48:50.634859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:40.373 [2024-07-20 18:48:50.634881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:40.373 [2024-07-20 18:48:50.634899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:40.373 [2024-07-20 18:48:50.635357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:40.373 [2024-07-20 18:48:50.635383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:40.373 [2024-07-20 18:48:50.635406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:40.373 [2024-07-20 18:48:50.635424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:40.373 [2024-07-20 18:48:50.635914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:40.373 [2024-07-20 18:48:50.635939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:40.373 [2024-07-20 18:48:50.635961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:40.373 [2024-07-20 18:48:50.635978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:40.373 passed 00:18:40.631 Test: blockdev nvme passthru rw ...passed 00:18:40.631 Test: blockdev nvme passthru vendor specific ...[2024-07-20 18:48:50.718365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:40.631 [2024-07-20 18:48:50.718392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:40.631 [2024-07-20 18:48:50.718708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:40.631 [2024-07-20 18:48:50.718731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:40.631 [2024-07-20 18:48:50.719045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:40.631 [2024-07-20 18:48:50.719069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:40.631 [2024-07-20 18:48:50.719384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:40.631 [2024-07-20 18:48:50.719407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:40.631 passed 00:18:40.631 Test: blockdev nvme admin passthru ...passed 00:18:40.631 Test: blockdev copy ...passed 00:18:40.631 00:18:40.631 Run Summary: Type Total Ran Passed Failed Inactive 00:18:40.631 suites 1 1 n/a 0 0 00:18:40.631 tests 23 23 23 0 0 00:18:40.631 asserts 152 152 152 0 n/a 00:18:40.631 00:18:40.631 Elapsed time = 1.319 seconds 00:18:40.889 18:48:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:40.889 18:48:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.889 18:48:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:40.889 18:48:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.889 18:48:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:40.889 18:48:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:18:40.889 18:48:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:40.889 18:48:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:40.889 18:48:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:40.889 18:48:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:40.889 18:48:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:40.889 18:48:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:40.889 rmmod nvme_tcp 00:18:40.889 rmmod nvme_fabrics 00:18:40.889 rmmod nvme_keyring 00:18:40.889 18:48:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:40.889 18:48:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:40.889 18:48:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:40.889 18:48:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1393132 ']' 00:18:40.889 18:48:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1393132 00:18:40.889 18:48:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
1393132 ']' 00:18:40.889 18:48:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 1393132 00:18:40.889 18:48:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:18:40.890 18:48:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:40.890 18:48:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1393132 00:18:40.890 18:48:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:18:40.890 18:48:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:18:40.890 18:48:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1393132' 00:18:40.890 killing process with pid 1393132 00:18:40.890 18:48:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 1393132 00:18:40.890 18:48:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 1393132 00:18:41.148 18:48:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:41.148 18:48:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:41.148 18:48:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:41.148 18:48:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:41.148 18:48:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:41.148 18:48:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.148 18:48:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:41.148 18:48:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.046 18:48:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:43.046 00:18:43.046 real 0m6.415s 00:18:43.046 user 0m10.650s 00:18:43.046 sys 0m2.115s 00:18:43.046 18:48:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:43.046 18:48:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:43.046 ************************************ 00:18:43.046 END TEST nvmf_bdevio 00:18:43.046 ************************************ 00:18:43.046 18:48:53 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:43.046 18:48:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:43.046 18:48:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:43.046 18:48:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:43.304 ************************************ 00:18:43.304 START TEST nvmf_auth_target 00:18:43.304 ************************************ 00:18:43.304 18:48:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:43.304 * Looking for test storage... 
00:18:43.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:43.304 18:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:43.304 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:43.304 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.304 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.304 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.304 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.304 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.304 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.304 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.304 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.304 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.304 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.304 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:43.304 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:43.304 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.304 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.304 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:43.304 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.304 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:43.304 18:48:53 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.304 18:48:53 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:43.305 18:48:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.204 18:48:55 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:45.204 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:45.204 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:18:45.204 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:45.204 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:45.204 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:45.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:18:45.462 00:18:45.462 --- 10.0.0.2 ping statistics --- 00:18:45.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.462 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:45.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:45.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:18:45.462 00:18:45.462 --- 10.0.0.1 ping statistics --- 00:18:45.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.462 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1395343 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1395343 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1395343 ']' 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
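The long gen_dhchap_key trace that follows builds the DH-HMAC-CHAP secrets the auth test will register. In outline: draw the requested number of random hex characters, write them to a mode-0600 temp file, and wrap them in the DHHC-1 secret representation. The wrapping helper itself is not expanded in the trace, so the Python step below is a hedged reconstruction, assuming the standard DHHC-1 form (base64 of the secret followed by its little-endian CRC-32, prefixed with the digest index 0=null, 1=sha256, 2=sha384, 3=sha512 as used by the calls below):

# e.g. the equivalent of "gen_dhchap_key null 48": 24 random bytes rendered as a
# 48-character hex string, which itself serves as the secret text
key=$(xxd -p -c0 -l 24 /dev/urandom)
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" 0 > "$file" << 'PYEOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                   # the hex text is the secret
digest = int(sys.argv[2])                       # 0=null, 1=sha256, 2=sha384, 3=sha512
crc = zlib.crc32(secret).to_bytes(4, "little")  # DHHC-1 secrets carry a trailing CRC-32
print(f"DHHC-1:{digest:02x}:{base64.b64encode(secret + crc).decode()}:")
PYEOF
chmod 0600 "$file"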
00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:45.462 18:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1395377 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=09e35d295c703cdcc15388327a51ec073180c65f296d39ba 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ETz 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 09e35d295c703cdcc15388327a51ec073180c65f296d39ba 0 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 09e35d295c703cdcc15388327a51ec073180c65f296d39ba 0 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=09e35d295c703cdcc15388327a51ec073180c65f296d39ba 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:45.747 18:48:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ETz 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ETz 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.ETz 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=905039ed18fa7c78ccc9645492461824ad1568acd9cde039d384ca6486cd1ec9 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.1RM 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 905039ed18fa7c78ccc9645492461824ad1568acd9cde039d384ca6486cd1ec9 3 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 905039ed18fa7c78ccc9645492461824ad1568acd9cde039d384ca6486cd1ec9 3 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=905039ed18fa7c78ccc9645492461824ad1568acd9cde039d384ca6486cd1ec9 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.1RM 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.1RM 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.1RM 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:45.747 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=10f60459c26c6628f66dd8f26e08c052 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Hmh 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 10f60459c26c6628f66dd8f26e08c052 1 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 10f60459c26c6628f66dd8f26e08c052 1 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=10f60459c26c6628f66dd8f26e08c052 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Hmh 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Hmh 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Hmh 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=28450ec7e1884b682ef993b6aa804304e2b41cf04db7e7dc 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.zUx 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 28450ec7e1884b682ef993b6aa804304e2b41cf04db7e7dc 2 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 28450ec7e1884b682ef993b6aa804304e2b41cf04db7e7dc 2 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=28450ec7e1884b682ef993b6aa804304e2b41cf04db7e7dc 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.zUx 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.zUx 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.zUx 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9cf4ae22b56a0233aef125828ac0580e3997fe6dbbbece2d 00:18:46.005 
18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.WOA 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9cf4ae22b56a0233aef125828ac0580e3997fe6dbbbece2d 2 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9cf4ae22b56a0233aef125828ac0580e3997fe6dbbbece2d 2 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9cf4ae22b56a0233aef125828ac0580e3997fe6dbbbece2d 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.WOA 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.WOA 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.WOA 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=69b711daf1187341bea0bacb311ad07a 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Ata 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 69b711daf1187341bea0bacb311ad07a 1 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 69b711daf1187341bea0bacb311ad07a 1 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=69b711daf1187341bea0bacb311ad07a 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Ata 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Ata 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Ata 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=05983b5d216cf76fb2d31575da3ade67d918ba7adffaf64523157c3e7d677081 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.gGF 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 05983b5d216cf76fb2d31575da3ade67d918ba7adffaf64523157c3e7d677081 3 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 05983b5d216cf76fb2d31575da3ade67d918ba7adffaf64523157c3e7d677081 3 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=05983b5d216cf76fb2d31575da3ade67d918ba7adffaf64523157c3e7d677081 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.gGF 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.gGF 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.gGF 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1395343 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1395343 ']' 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
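The gen_dhchap_key calls traced above produce the DH-HMAC-CHAP secrets used later in the run: random bytes come from xxd -p -c0 -l <bytes> /dev/urandom, a small python step (not shown in the trace) wraps them into a DHHC-1 secret string, and the result lands in a mode-0600 temp file. A minimal standalone sketch of the same idea, assuming the common DHHC-1:<digest-id>:<base64(key + crc32)>: layout — that layout is an assumption here, since the python body is not captured:

key_hex=$(xxd -p -c0 -l 24 /dev/urandom)          # 24 random bytes -> 48 hex chars, matching "gen_dhchap_key null 48"
secret=$(python3 - "$key_hex" <<'PY'
import base64, binascii, sys
key = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(key).to_bytes(4, "little")   # assumed layout: CRC32 of the key, appended little-endian
print("DHHC-1:00:" + base64.b64encode(key + crc).decode() + ":")   # 00=null, 01=sha256, 02=sha384, 03=sha512 in this trace
PY
)
keyfile=$(mktemp -t spdk.key-null.XXX)            # e.g. /tmp/spdk.key-null.ETz above
printf '%s\n' "$secret" > "$keyfile"
chmod 0600 "$keyfile"

The trace generates such a file twice per slot (keys[i] and ckeys[i]) so each subsystem host entry can carry both a host key and a controller key for bidirectional authentication.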
00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:46.005 18:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.262 18:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:46.262 18:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:46.262 18:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1395377 /var/tmp/host.sock 00:18:46.262 18:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1395377 ']' 00:18:46.262 18:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:18:46.262 18:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:46.262 18:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:46.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:46.262 18:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:46.262 18:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.520 18:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:46.520 18:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:46.520 18:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:46.520 18:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.520 18:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.520 18:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.520 18:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:46.520 18:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ETz 00:18:46.520 18:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.520 18:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.520 18:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.520 18:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ETz 00:18:46.520 18:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ETz 00:18:46.776 18:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.1RM ]] 00:18:46.776 18:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1RM 00:18:46.776 18:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.776 18:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.776 18:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.777 18:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1RM 00:18:46.777 18:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1RM 00:18:47.033 18:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:47.033 18:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Hmh 00:18:47.033 18:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.033 18:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.033 18:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.033 18:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Hmh 00:18:47.033 18:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Hmh 00:18:47.289 18:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.zUx ]] 00:18:47.289 18:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zUx 00:18:47.289 18:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.289 18:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.289 18:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.289 18:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zUx 00:18:47.290 18:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zUx 00:18:47.547 18:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:47.547 18:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.WOA 00:18:47.547 18:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.547 18:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.547 18:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.547 18:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.WOA 00:18:47.547 18:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.WOA 00:18:47.806 18:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Ata ]] 00:18:47.806 18:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ata 00:18:47.806 18:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.806 18:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.806 18:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.806 18:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ata 00:18:47.806 18:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.Ata 00:18:48.064 18:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:48.064 18:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.gGF 00:18:48.064 18:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.064 18:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.064 18:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.064 18:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.gGF 00:18:48.064 18:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.gGF 00:18:48.321 18:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:48.321 18:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:48.321 18:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:48.321 18:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.321 18:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:48.321 18:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:48.578 18:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:48.578 18:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.578 18:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:48.578 18:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:48.578 18:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:48.578 18:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.578 18:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.578 18:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.578 18:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.578 18:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.578 18:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.578 18:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.836 00:18:48.836 18:48:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.836 18:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.836 18:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.093 18:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.093 18:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.093 18:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.093 18:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.093 18:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.093 18:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.093 { 00:18:49.093 "cntlid": 1, 00:18:49.093 "qid": 0, 00:18:49.093 "state": "enabled", 00:18:49.093 "listen_address": { 00:18:49.093 "trtype": "TCP", 00:18:49.093 "adrfam": "IPv4", 00:18:49.093 "traddr": "10.0.0.2", 00:18:49.093 "trsvcid": "4420" 00:18:49.093 }, 00:18:49.093 "peer_address": { 00:18:49.093 "trtype": "TCP", 00:18:49.093 "adrfam": "IPv4", 00:18:49.093 "traddr": "10.0.0.1", 00:18:49.093 "trsvcid": "50032" 00:18:49.093 }, 00:18:49.093 "auth": { 00:18:49.093 "state": "completed", 00:18:49.093 "digest": "sha256", 00:18:49.093 "dhgroup": "null" 00:18:49.093 } 00:18:49.093 } 00:18:49.093 ]' 00:18:49.093 18:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.350 18:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.350 18:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.350 18:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:49.350 18:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.350 18:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.350 18:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.350 18:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.608 18:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDllMzVkMjk1YzcwM2NkY2MxNTM4ODMyN2E1MWVjMDczMTgwYzY1ZjI5NmQzOWJhkXpF0A==: --dhchap-ctrl-secret DHHC-1:03:OTA1MDM5ZWQxOGZhN2M3OGNjYzk2NDU0OTI0NjE4MjRhZDE1NjhhY2Q5Y2RlMDM5ZDM4NGNhNjQ4NmNkMWVjOfej62w=: 00:18:50.538 18:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.538 18:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:50.538 18:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.538 18:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:50.538 18:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.538 18:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.538 18:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:50.538 18:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:50.795 18:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:50.795 18:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.795 18:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:50.795 18:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:50.795 18:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:50.795 18:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.795 18:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.795 18:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.795 18:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.795 18:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.796 18:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.796 18:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.053 00:18:51.053 18:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.053 18:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.053 18:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.310 18:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.310 18:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.310 18:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.310 18:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.310 18:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.310 18:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.310 { 00:18:51.310 "cntlid": 3, 00:18:51.310 "qid": 0, 00:18:51.310 "state": "enabled", 00:18:51.310 "listen_address": { 00:18:51.310 
"trtype": "TCP", 00:18:51.310 "adrfam": "IPv4", 00:18:51.310 "traddr": "10.0.0.2", 00:18:51.310 "trsvcid": "4420" 00:18:51.310 }, 00:18:51.310 "peer_address": { 00:18:51.310 "trtype": "TCP", 00:18:51.310 "adrfam": "IPv4", 00:18:51.310 "traddr": "10.0.0.1", 00:18:51.310 "trsvcid": "50052" 00:18:51.310 }, 00:18:51.310 "auth": { 00:18:51.310 "state": "completed", 00:18:51.310 "digest": "sha256", 00:18:51.310 "dhgroup": "null" 00:18:51.310 } 00:18:51.310 } 00:18:51.310 ]' 00:18:51.310 18:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.310 18:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.310 18:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.310 18:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:51.310 18:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.310 18:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.310 18:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.310 18:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.568 18:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTBmNjA0NTljMjZjNjYyOGY2NmRkOGYyNmUwOGMwNTIu2TL+: --dhchap-ctrl-secret DHHC-1:02:Mjg0NTBlYzdlMTg4NGI2ODJlZjk5M2I2YWE4MDQzMDRlMmI0MWNmMDRkYjdlN2RjeXTjkw==: 00:18:52.500 18:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.500 18:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:52.500 18:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.500 18:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.500 18:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.500 18:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.500 18:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:52.500 18:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:52.762 18:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:52.762 18:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.762 18:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:52.762 18:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:52.762 18:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:52.762 18:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.762 18:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.762 18:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.762 18:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.762 18:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.762 18:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.762 18:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.020 00:18:53.020 18:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.020 18:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.020 18:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.277 18:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.277 18:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.277 18:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.277 18:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.277 18:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.277 18:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.277 { 00:18:53.277 "cntlid": 5, 00:18:53.277 "qid": 0, 00:18:53.277 "state": "enabled", 00:18:53.277 "listen_address": { 00:18:53.277 "trtype": "TCP", 00:18:53.277 "adrfam": "IPv4", 00:18:53.277 "traddr": "10.0.0.2", 00:18:53.277 "trsvcid": "4420" 00:18:53.277 }, 00:18:53.277 "peer_address": { 00:18:53.277 "trtype": "TCP", 00:18:53.277 "adrfam": "IPv4", 00:18:53.277 "traddr": "10.0.0.1", 00:18:53.277 "trsvcid": "50082" 00:18:53.277 }, 00:18:53.277 "auth": { 00:18:53.277 "state": "completed", 00:18:53.277 "digest": "sha256", 00:18:53.277 "dhgroup": "null" 00:18:53.277 } 00:18:53.277 } 00:18:53.277 ]' 00:18:53.277 18:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.534 18:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:53.534 18:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.534 18:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:53.534 18:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.534 18:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.534 18:49:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.534 18:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.792 18:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OWNmNGFlMjJiNTZhMDIzM2FlZjEyNTgyOGFjMDU4MGUzOTk3ZmU2ZGJiYmVjZTJkeC3lbA==: --dhchap-ctrl-secret DHHC-1:01:NjliNzExZGFmMTE4NzM0MWJlYTBiYWNiMzExYWQwN2GVe8VP: 00:18:54.725 18:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.725 18:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.725 18:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.725 18:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.725 18:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.725 18:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.725 18:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:54.725 18:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:54.983 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:54.983 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.983 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:54.983 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:54.983 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:54.983 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.983 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:54.983 18:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.983 18:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.983 18:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.983 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.983 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.241 00:18:55.241 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.241 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.241 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.500 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.500 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.500 18:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.500 18:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.500 18:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.500 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.500 { 00:18:55.500 "cntlid": 7, 00:18:55.500 "qid": 0, 00:18:55.500 "state": "enabled", 00:18:55.500 "listen_address": { 00:18:55.500 "trtype": "TCP", 00:18:55.500 "adrfam": "IPv4", 00:18:55.500 "traddr": "10.0.0.2", 00:18:55.500 "trsvcid": "4420" 00:18:55.500 }, 00:18:55.500 "peer_address": { 00:18:55.500 "trtype": "TCP", 00:18:55.500 "adrfam": "IPv4", 00:18:55.500 "traddr": "10.0.0.1", 00:18:55.500 "trsvcid": "32984" 00:18:55.500 }, 00:18:55.500 "auth": { 00:18:55.500 "state": "completed", 00:18:55.500 "digest": "sha256", 00:18:55.500 "dhgroup": "null" 00:18:55.500 } 00:18:55.500 } 00:18:55.500 ]' 00:18:55.500 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.500 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:55.500 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.500 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:55.500 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.500 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.500 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.500 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.758 18:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDU5ODNiNWQyMTZjZjc2ZmIyZDMxNTc1ZGEzYWRlNjdkOTE4YmE3YWRmZmFmNjQ1MjMxNTdjM2U3ZDY3NzA4MZTqCOQ=: 00:18:56.693 18:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.693 18:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:56.693 18:49:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.693 
18:49:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.693 18:49:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.693 18:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.693 18:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.693 18:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:56.693 18:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:56.951 18:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:56.951 18:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.951 18:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:56.951 18:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:56.951 18:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:56.951 18:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.951 18:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.951 18:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.951 18:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.951 18:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.951 18:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.951 18:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.517 00:18:57.517 18:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.517 18:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.517 18:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.517 18:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.517 18:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.517 18:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.517 18:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.517 18:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.517 18:49:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.517 { 00:18:57.517 "cntlid": 9, 00:18:57.517 "qid": 0, 00:18:57.517 "state": "enabled", 00:18:57.517 "listen_address": { 00:18:57.517 "trtype": "TCP", 00:18:57.517 "adrfam": "IPv4", 00:18:57.517 "traddr": "10.0.0.2", 00:18:57.517 "trsvcid": "4420" 00:18:57.517 }, 00:18:57.517 "peer_address": { 00:18:57.517 "trtype": "TCP", 00:18:57.517 "adrfam": "IPv4", 00:18:57.517 "traddr": "10.0.0.1", 00:18:57.517 "trsvcid": "33006" 00:18:57.517 }, 00:18:57.517 "auth": { 00:18:57.517 "state": "completed", 00:18:57.517 "digest": "sha256", 00:18:57.517 "dhgroup": "ffdhe2048" 00:18:57.517 } 00:18:57.517 } 00:18:57.517 ]' 00:18:57.517 18:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.775 18:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.775 18:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.775 18:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:57.775 18:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.775 18:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.775 18:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.775 18:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.032 18:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDllMzVkMjk1YzcwM2NkY2MxNTM4ODMyN2E1MWVjMDczMTgwYzY1ZjI5NmQzOWJhkXpF0A==: --dhchap-ctrl-secret DHHC-1:03:OTA1MDM5ZWQxOGZhN2M3OGNjYzk2NDU0OTI0NjE4MjRhZDE1NjhhY2Q5Y2RlMDM5ZDM4NGNhNjQ4NmNkMWVjOfej62w=: 00:18:58.971 18:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.971 18:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:58.971 18:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.971 18:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.971 18:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.971 18:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.972 18:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:58.972 18:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:59.281 18:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:59.281 18:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.281 18:49:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:59.281 18:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:59.281 18:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:59.281 18:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.281 18:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.281 18:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.281 18:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.281 18:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.281 18:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.281 18:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.539 00:18:59.539 18:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.539 18:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.539 18:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.797 18:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.797 18:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.797 18:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.797 18:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.797 18:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.797 18:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.797 { 00:18:59.797 "cntlid": 11, 00:18:59.797 "qid": 0, 00:18:59.797 "state": "enabled", 00:18:59.797 "listen_address": { 00:18:59.797 "trtype": "TCP", 00:18:59.797 "adrfam": "IPv4", 00:18:59.797 "traddr": "10.0.0.2", 00:18:59.797 "trsvcid": "4420" 00:18:59.797 }, 00:18:59.797 "peer_address": { 00:18:59.797 "trtype": "TCP", 00:18:59.797 "adrfam": "IPv4", 00:18:59.797 "traddr": "10.0.0.1", 00:18:59.797 "trsvcid": "33036" 00:18:59.797 }, 00:18:59.797 "auth": { 00:18:59.797 "state": "completed", 00:18:59.797 "digest": "sha256", 00:18:59.797 "dhgroup": "ffdhe2048" 00:18:59.797 } 00:18:59.797 } 00:18:59.797 ]' 00:18:59.797 18:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.797 18:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.797 18:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.054 18:49:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:00.054 18:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.054 18:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.054 18:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.054 18:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.311 18:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTBmNjA0NTljMjZjNjYyOGY2NmRkOGYyNmUwOGMwNTIu2TL+: --dhchap-ctrl-secret DHHC-1:02:Mjg0NTBlYzdlMTg4NGI2ODJlZjk5M2I2YWE4MDQzMDRlMmI0MWNmMDRkYjdlN2RjeXTjkw==: 00:19:01.243 18:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.243 18:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:01.243 18:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.243 18:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.243 18:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.243 18:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.243 18:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:01.243 18:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:01.501 18:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:01.501 18:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.501 18:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:01.501 18:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:01.501 18:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:01.501 18:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.501 18:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.501 18:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.501 18:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.501 18:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.501 18:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.501 18:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.758 00:19:01.758 18:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.758 18:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.758 18:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.015 18:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.015 18:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.015 18:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.015 18:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.015 18:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.015 18:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.015 { 00:19:02.015 "cntlid": 13, 00:19:02.015 "qid": 0, 00:19:02.015 "state": "enabled", 00:19:02.015 "listen_address": { 00:19:02.015 "trtype": "TCP", 00:19:02.015 "adrfam": "IPv4", 00:19:02.015 "traddr": "10.0.0.2", 00:19:02.015 "trsvcid": "4420" 00:19:02.015 }, 00:19:02.015 "peer_address": { 00:19:02.015 "trtype": "TCP", 00:19:02.015 "adrfam": "IPv4", 00:19:02.015 "traddr": "10.0.0.1", 00:19:02.015 "trsvcid": "33074" 00:19:02.015 }, 00:19:02.015 "auth": { 00:19:02.015 "state": "completed", 00:19:02.015 "digest": "sha256", 00:19:02.015 "dhgroup": "ffdhe2048" 00:19:02.015 } 00:19:02.015 } 00:19:02.015 ]' 00:19:02.016 18:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.016 18:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.016 18:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.016 18:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:02.016 18:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.273 18:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.273 18:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.273 18:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.531 18:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OWNmNGFlMjJiNTZhMDIzM2FlZjEyNTgyOGFjMDU4MGUzOTk3ZmU2ZGJiYmVjZTJkeC3lbA==: --dhchap-ctrl-secret DHHC-1:01:NjliNzExZGFmMTE4NzM0MWJlYTBiYWNiMzExYWQwN2GVe8VP: 00:19:03.461 18:49:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.461 18:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.461 18:49:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.461 18:49:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.461 18:49:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.461 18:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.461 18:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:03.461 18:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:03.719 18:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:03.719 18:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.719 18:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:03.719 18:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:03.719 18:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:03.719 18:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.719 18:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:03.719 18:49:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.719 18:49:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.719 18:49:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.719 18:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:03.719 18:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:03.977 00:19:03.977 18:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.977 18:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.977 18:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.234 18:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.234 18:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
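Each of the iterations logged above runs the same connect_authenticate pattern: the host bdev layer is restricted to the digest/DH group under test, the host NQN is registered on the target with the key for this iteration, and a controller is attached over TCP, which only succeeds if DH-HMAC-CHAP completes. The sketch below is a reading aid for that sequence, not the literal target/auth.sh code: key3/ckey3 (and the other keyN/ckeyN names) are key names registered earlier in the run and not shown in this excerpt, the host-side socket /var/tmp/host.sock is taken from the hostrpc expansions above, and the target-side rpc_cmd socket handling is left to the test harness.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Host side: allow only the digest and DH group under test for DH-HMAC-CHAP
"$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Target side: register the host NQN with this iteration's key
# (iterations that test bidirectional auth also pass --dhchap-ctrlr-key ckeyN)
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

# Host side: attach a controller; this is the step where authentication runs
"$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key3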
00:19:04.234 18:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.234 18:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.234 18:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.234 18:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.234 { 00:19:04.234 "cntlid": 15, 00:19:04.234 "qid": 0, 00:19:04.234 "state": "enabled", 00:19:04.234 "listen_address": { 00:19:04.234 "trtype": "TCP", 00:19:04.234 "adrfam": "IPv4", 00:19:04.234 "traddr": "10.0.0.2", 00:19:04.234 "trsvcid": "4420" 00:19:04.234 }, 00:19:04.234 "peer_address": { 00:19:04.234 "trtype": "TCP", 00:19:04.234 "adrfam": "IPv4", 00:19:04.234 "traddr": "10.0.0.1", 00:19:04.234 "trsvcid": "39850" 00:19:04.234 }, 00:19:04.234 "auth": { 00:19:04.234 "state": "completed", 00:19:04.234 "digest": "sha256", 00:19:04.234 "dhgroup": "ffdhe2048" 00:19:04.234 } 00:19:04.234 } 00:19:04.234 ]' 00:19:04.234 18:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.234 18:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.234 18:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.234 18:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:04.234 18:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.234 18:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.234 18:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.234 18:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.492 18:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDU5ODNiNWQyMTZjZjc2ZmIyZDMxNTc1ZGEzYWRlNjdkOTE4YmE3YWRmZmFmNjQ1MjMxNTdjM2U3ZDY3NzA4MZTqCOQ=: 00:19:05.426 18:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.426 18:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:05.426 18:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.426 18:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.426 18:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.426 18:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:05.426 18:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.426 18:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:05.426 18:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:05.685 18:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:05.685 18:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.685 18:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:05.685 18:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:05.685 18:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:05.685 18:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.685 18:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.685 18:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.685 18:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.685 18:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.685 18:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.944 18:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.201 00:19:06.201 18:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.201 18:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.201 18:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.457 18:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.457 18:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.457 18:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.457 18:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.457 18:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.457 18:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.457 { 00:19:06.457 "cntlid": 17, 00:19:06.457 "qid": 0, 00:19:06.457 "state": "enabled", 00:19:06.457 "listen_address": { 00:19:06.457 "trtype": "TCP", 00:19:06.457 "adrfam": "IPv4", 00:19:06.457 "traddr": "10.0.0.2", 00:19:06.457 "trsvcid": "4420" 00:19:06.457 }, 00:19:06.457 "peer_address": { 00:19:06.457 "trtype": "TCP", 00:19:06.457 "adrfam": "IPv4", 00:19:06.457 "traddr": "10.0.0.1", 00:19:06.457 "trsvcid": "39888" 00:19:06.457 }, 00:19:06.457 "auth": { 00:19:06.457 "state": "completed", 00:19:06.457 "digest": "sha256", 00:19:06.457 "dhgroup": "ffdhe3072" 00:19:06.457 } 00:19:06.457 } 00:19:06.457 ]' 00:19:06.457 18:49:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.457 18:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.457 18:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.457 18:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:06.457 18:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.457 18:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.457 18:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.457 18:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.714 18:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDllMzVkMjk1YzcwM2NkY2MxNTM4ODMyN2E1MWVjMDczMTgwYzY1ZjI5NmQzOWJhkXpF0A==: --dhchap-ctrl-secret DHHC-1:03:OTA1MDM5ZWQxOGZhN2M3OGNjYzk2NDU0OTI0NjE4MjRhZDE1NjhhY2Q5Y2RlMDM5ZDM4NGNhNjQ4NmNkMWVjOfej62w=: 00:19:07.644 18:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.644 18:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:07.644 18:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.644 18:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.644 18:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.644 18:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.644 18:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:07.644 18:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:08.207 18:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:08.207 18:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.207 18:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:08.207 18:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:08.207 18:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:08.207 18:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.207 18:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.207 18:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.207 
18:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.207 18:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.207 18:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.208 18:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.464 00:19:08.464 18:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.464 18:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.464 18:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.721 18:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.721 18:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.721 18:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.721 18:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.721 18:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.721 18:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.721 { 00:19:08.721 "cntlid": 19, 00:19:08.721 "qid": 0, 00:19:08.721 "state": "enabled", 00:19:08.721 "listen_address": { 00:19:08.721 "trtype": "TCP", 00:19:08.721 "adrfam": "IPv4", 00:19:08.721 "traddr": "10.0.0.2", 00:19:08.721 "trsvcid": "4420" 00:19:08.721 }, 00:19:08.721 "peer_address": { 00:19:08.721 "trtype": "TCP", 00:19:08.721 "adrfam": "IPv4", 00:19:08.721 "traddr": "10.0.0.1", 00:19:08.721 "trsvcid": "39900" 00:19:08.721 }, 00:19:08.721 "auth": { 00:19:08.721 "state": "completed", 00:19:08.721 "digest": "sha256", 00:19:08.721 "dhgroup": "ffdhe3072" 00:19:08.721 } 00:19:08.721 } 00:19:08.721 ]' 00:19:08.721 18:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.721 18:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.721 18:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.721 18:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:08.721 18:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.721 18:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.721 18:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.721 18:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.978 18:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTBmNjA0NTljMjZjNjYyOGY2NmRkOGYyNmUwOGMwNTIu2TL+: --dhchap-ctrl-secret DHHC-1:02:Mjg0NTBlYzdlMTg4NGI2ODJlZjk5M2I2YWE4MDQzMDRlMmI0MWNmMDRkYjdlN2RjeXTjkw==: 00:19:09.907 18:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.907 18:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:09.907 18:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.907 18:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.907 18:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.907 18:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.907 18:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:09.907 18:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:10.471 18:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:10.471 18:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.471 18:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:10.471 18:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:10.471 18:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:10.471 18:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.471 18:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.471 18:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.471 18:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.471 18:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.471 18:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.471 18:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.727 00:19:10.727 18:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.727 18:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
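Once the controller is attached, the result is checked from both ends: the host lists its controllers to confirm nvme0 exists, and the target reports the admin qpair's auth block, which is compared against the expected digest, DH group, and a state of "completed" using the same jq filters that appear above. A minimal sketch of those checks, reusing the RPC/SUBNQN shorthand from the previous sketch (the real script drives this through its hostrpc/rpc_cmd helpers):

# Host: the attached controller should be reported back as nvme0
[[ $("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Target: the qpair's auth block shows what was actually negotiated
qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Tear the host-side controller down before the kernel-initiator pass
"$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0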
00:19:10.727 18:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.985 18:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.985 18:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.985 18:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.985 18:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.985 18:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.985 18:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.985 { 00:19:10.985 "cntlid": 21, 00:19:10.985 "qid": 0, 00:19:10.985 "state": "enabled", 00:19:10.985 "listen_address": { 00:19:10.985 "trtype": "TCP", 00:19:10.985 "adrfam": "IPv4", 00:19:10.985 "traddr": "10.0.0.2", 00:19:10.985 "trsvcid": "4420" 00:19:10.985 }, 00:19:10.985 "peer_address": { 00:19:10.985 "trtype": "TCP", 00:19:10.985 "adrfam": "IPv4", 00:19:10.985 "traddr": "10.0.0.1", 00:19:10.985 "trsvcid": "39934" 00:19:10.985 }, 00:19:10.985 "auth": { 00:19:10.985 "state": "completed", 00:19:10.985 "digest": "sha256", 00:19:10.985 "dhgroup": "ffdhe3072" 00:19:10.985 } 00:19:10.985 } 00:19:10.985 ]' 00:19:10.985 18:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.985 18:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.985 18:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.985 18:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:10.985 18:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.985 18:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.985 18:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.985 18:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.242 18:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OWNmNGFlMjJiNTZhMDIzM2FlZjEyNTgyOGFjMDU4MGUzOTk3ZmU2ZGJiYmVjZTJkeC3lbA==: --dhchap-ctrl-secret DHHC-1:01:NjliNzExZGFmMTE4NzM0MWJlYTBiYWNiMzExYWQwN2GVe8VP: 00:19:12.174 18:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.174 18:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.174 18:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.174 18:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.174 18:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.174 18:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:19:12.174 18:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:12.174 18:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:12.432 18:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:12.432 18:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.432 18:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:12.432 18:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:12.432 18:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:12.432 18:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.432 18:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:12.432 18:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.432 18:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.432 18:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.432 18:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.432 18:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:13.031 00:19:13.031 18:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.031 18:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.031 18:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.288 18:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.288 18:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.288 18:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.288 18:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.288 18:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.288 18:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.288 { 00:19:13.288 "cntlid": 23, 00:19:13.288 "qid": 0, 00:19:13.288 "state": "enabled", 00:19:13.288 "listen_address": { 00:19:13.288 "trtype": "TCP", 00:19:13.288 "adrfam": "IPv4", 00:19:13.288 "traddr": "10.0.0.2", 00:19:13.288 "trsvcid": "4420" 00:19:13.288 }, 00:19:13.288 "peer_address": { 00:19:13.288 "trtype": "TCP", 00:19:13.288 "adrfam": "IPv4", 
00:19:13.288 "traddr": "10.0.0.1", 00:19:13.288 "trsvcid": "39962" 00:19:13.288 }, 00:19:13.288 "auth": { 00:19:13.288 "state": "completed", 00:19:13.288 "digest": "sha256", 00:19:13.288 "dhgroup": "ffdhe3072" 00:19:13.288 } 00:19:13.288 } 00:19:13.288 ]' 00:19:13.288 18:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.288 18:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.288 18:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.289 18:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:13.289 18:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.289 18:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.289 18:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.289 18:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.545 18:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDU5ODNiNWQyMTZjZjc2ZmIyZDMxNTc1ZGEzYWRlNjdkOTE4YmE3YWRmZmFmNjQ1MjMxNTdjM2U3ZDY3NzA4MZTqCOQ=: 00:19:14.476 18:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.476 18:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:14.476 18:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.476 18:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.476 18:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.476 18:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.476 18:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.476 18:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:14.476 18:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:14.734 18:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:14.734 18:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.734 18:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:14.734 18:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:14.734 18:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:14.734 18:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.734 18:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.734 18:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.734 18:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.734 18:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.734 18:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.734 18:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.299 00:19:15.299 18:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.299 18:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.299 18:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.556 18:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.556 18:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.556 18:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.556 18:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.556 18:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.556 18:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.556 { 00:19:15.556 "cntlid": 25, 00:19:15.556 "qid": 0, 00:19:15.556 "state": "enabled", 00:19:15.556 "listen_address": { 00:19:15.556 "trtype": "TCP", 00:19:15.556 "adrfam": "IPv4", 00:19:15.556 "traddr": "10.0.0.2", 00:19:15.556 "trsvcid": "4420" 00:19:15.556 }, 00:19:15.556 "peer_address": { 00:19:15.556 "trtype": "TCP", 00:19:15.556 "adrfam": "IPv4", 00:19:15.556 "traddr": "10.0.0.1", 00:19:15.556 "trsvcid": "34692" 00:19:15.556 }, 00:19:15.556 "auth": { 00:19:15.556 "state": "completed", 00:19:15.556 "digest": "sha256", 00:19:15.556 "dhgroup": "ffdhe4096" 00:19:15.556 } 00:19:15.556 } 00:19:15.556 ]' 00:19:15.556 18:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.556 18:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.556 18:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.556 18:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:15.556 18:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.556 18:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.556 18:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.556 18:49:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.814 18:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDllMzVkMjk1YzcwM2NkY2MxNTM4ODMyN2E1MWVjMDczMTgwYzY1ZjI5NmQzOWJhkXpF0A==: --dhchap-ctrl-secret DHHC-1:03:OTA1MDM5ZWQxOGZhN2M3OGNjYzk2NDU0OTI0NjE4MjRhZDE1NjhhY2Q5Y2RlMDM5ZDM4NGNhNjQ4NmNkMWVjOfej62w=: 00:19:16.746 18:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.746 18:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.746 18:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.746 18:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.746 18:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.746 18:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.746 18:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:16.746 18:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:17.004 18:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:17.004 18:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.004 18:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:17.004 18:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:17.004 18:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:17.004 18:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.004 18:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.004 18:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.004 18:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.262 18:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.262 18:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.262 18:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.521 00:19:17.521 18:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.521 18:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.521 18:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.779 18:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.779 18:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.779 18:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.779 18:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.779 18:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.779 18:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.779 { 00:19:17.779 "cntlid": 27, 00:19:17.779 "qid": 0, 00:19:17.779 "state": "enabled", 00:19:17.779 "listen_address": { 00:19:17.779 "trtype": "TCP", 00:19:17.779 "adrfam": "IPv4", 00:19:17.779 "traddr": "10.0.0.2", 00:19:17.779 "trsvcid": "4420" 00:19:17.779 }, 00:19:17.779 "peer_address": { 00:19:17.779 "trtype": "TCP", 00:19:17.779 "adrfam": "IPv4", 00:19:17.779 "traddr": "10.0.0.1", 00:19:17.779 "trsvcid": "34726" 00:19:17.779 }, 00:19:17.779 "auth": { 00:19:17.779 "state": "completed", 00:19:17.779 "digest": "sha256", 00:19:17.779 "dhgroup": "ffdhe4096" 00:19:17.779 } 00:19:17.779 } 00:19:17.779 ]' 00:19:17.779 18:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.779 18:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.779 18:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.779 18:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:17.779 18:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.037 18:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.037 18:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.037 18:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.038 18:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTBmNjA0NTljMjZjNjYyOGY2NmRkOGYyNmUwOGMwNTIu2TL+: --dhchap-ctrl-secret DHHC-1:02:Mjg0NTBlYzdlMTg4NGI2ODJlZjk5M2I2YWE4MDQzMDRlMmI0MWNmMDRkYjdlN2RjeXTjkw==: 00:19:19.410 18:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.410 18:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
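After the SPDK-host pass, the same credentials are exercised through the Linux kernel initiator: nvme-cli connects with the secrets in their DHHC-1 text form (per the NVMe DH-HMAC-CHAP secret representation, the two digits after "DHHC-1:" indicate how the secret was transformed: 00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512), the log confirms that exactly one controller disconnects, and the host entry is removed from the target before the next key is tried. A sketch under the same assumptions as above; the DHHC-1 values are elided placeholders here, the real ones appear in the log entries above:

# Kernel initiator: in-band DH-HMAC-CHAP with the host secret (and, when the
# iteration uses one, the controller secret for bidirectional authentication)
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 \
    -q "$HOSTNQN" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret 'DHHC-1:01:<host secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:02:<ctrl secret>:'

# Clean up: drop the kernel controller, then unregister the host on the target
nvme disconnect -n "$SUBNQN"
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"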
00:19:19.410 18:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.410 18:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.410 18:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.410 18:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.410 18:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:19.410 18:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:19.410 18:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:19.410 18:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.410 18:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:19.410 18:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:19.410 18:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:19.410 18:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.410 18:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.410 18:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.410 18:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.410 18:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.410 18:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.410 18:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.974 00:19:19.975 18:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.975 18:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.975 18:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.232 18:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.232 18:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.232 18:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.232 18:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.232 18:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.232 
18:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.232 { 00:19:20.232 "cntlid": 29, 00:19:20.232 "qid": 0, 00:19:20.232 "state": "enabled", 00:19:20.232 "listen_address": { 00:19:20.232 "trtype": "TCP", 00:19:20.232 "adrfam": "IPv4", 00:19:20.232 "traddr": "10.0.0.2", 00:19:20.232 "trsvcid": "4420" 00:19:20.232 }, 00:19:20.232 "peer_address": { 00:19:20.232 "trtype": "TCP", 00:19:20.232 "adrfam": "IPv4", 00:19:20.232 "traddr": "10.0.0.1", 00:19:20.232 "trsvcid": "34742" 00:19:20.232 }, 00:19:20.232 "auth": { 00:19:20.232 "state": "completed", 00:19:20.232 "digest": "sha256", 00:19:20.232 "dhgroup": "ffdhe4096" 00:19:20.232 } 00:19:20.232 } 00:19:20.232 ]' 00:19:20.232 18:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.232 18:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.232 18:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.232 18:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:20.232 18:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.232 18:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.232 18:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.232 18:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.489 18:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OWNmNGFlMjJiNTZhMDIzM2FlZjEyNTgyOGFjMDU4MGUzOTk3ZmU2ZGJiYmVjZTJkeC3lbA==: --dhchap-ctrl-secret DHHC-1:01:NjliNzExZGFmMTE4NzM0MWJlYTBiYWNiMzExYWQwN2GVe8VP: 00:19:21.420 18:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.420 18:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.420 18:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.420 18:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.420 18:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.420 18:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.420 18:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:21.420 18:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:21.677 18:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:21.677 18:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.677 18:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:19:21.677 18:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:21.677 18:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:21.677 18:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.677 18:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:21.677 18:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.677 18:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.677 18:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.677 18:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.677 18:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.242 00:19:22.242 18:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.242 18:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.242 18:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.500 18:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.500 18:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.500 18:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.500 18:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.500 18:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.500 18:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.500 { 00:19:22.500 "cntlid": 31, 00:19:22.500 "qid": 0, 00:19:22.500 "state": "enabled", 00:19:22.500 "listen_address": { 00:19:22.500 "trtype": "TCP", 00:19:22.500 "adrfam": "IPv4", 00:19:22.500 "traddr": "10.0.0.2", 00:19:22.500 "trsvcid": "4420" 00:19:22.500 }, 00:19:22.500 "peer_address": { 00:19:22.500 "trtype": "TCP", 00:19:22.500 "adrfam": "IPv4", 00:19:22.500 "traddr": "10.0.0.1", 00:19:22.500 "trsvcid": "34770" 00:19:22.500 }, 00:19:22.500 "auth": { 00:19:22.500 "state": "completed", 00:19:22.500 "digest": "sha256", 00:19:22.500 "dhgroup": "ffdhe4096" 00:19:22.500 } 00:19:22.500 } 00:19:22.500 ]' 00:19:22.500 18:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.500 18:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.500 18:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.500 18:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:22.500 18:49:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.500 18:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.500 18:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.500 18:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.763 18:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDU5ODNiNWQyMTZjZjc2ZmIyZDMxNTc1ZGEzYWRlNjdkOTE4YmE3YWRmZmFmNjQ1MjMxNTdjM2U3ZDY3NzA4MZTqCOQ=: 00:19:23.694 18:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.694 18:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.694 18:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.694 18:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.952 18:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.952 18:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.952 18:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.952 18:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:23.952 18:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:24.209 18:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:24.209 18:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.209 18:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:24.209 18:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:24.209 18:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:24.209 18:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.209 18:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.209 18:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.209 18:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.209 18:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.209 18:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:19:24.209 18:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.773 00:19:24.774 18:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.774 18:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.774 18:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.030 18:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.030 18:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.030 18:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.030 18:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.030 18:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.030 18:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.030 { 00:19:25.030 "cntlid": 33, 00:19:25.030 "qid": 0, 00:19:25.030 "state": "enabled", 00:19:25.030 "listen_address": { 00:19:25.030 "trtype": "TCP", 00:19:25.030 "adrfam": "IPv4", 00:19:25.030 "traddr": "10.0.0.2", 00:19:25.030 "trsvcid": "4420" 00:19:25.030 }, 00:19:25.030 "peer_address": { 00:19:25.030 "trtype": "TCP", 00:19:25.030 "adrfam": "IPv4", 00:19:25.030 "traddr": "10.0.0.1", 00:19:25.030 "trsvcid": "34882" 00:19:25.030 }, 00:19:25.030 "auth": { 00:19:25.030 "state": "completed", 00:19:25.030 "digest": "sha256", 00:19:25.030 "dhgroup": "ffdhe6144" 00:19:25.030 } 00:19:25.030 } 00:19:25.030 ]' 00:19:25.030 18:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.030 18:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.030 18:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.030 18:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:25.031 18:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.031 18:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.031 18:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.031 18:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.287 18:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDllMzVkMjk1YzcwM2NkY2MxNTM4ODMyN2E1MWVjMDczMTgwYzY1ZjI5NmQzOWJhkXpF0A==: --dhchap-ctrl-secret DHHC-1:03:OTA1MDM5ZWQxOGZhN2M3OGNjYzk2NDU0OTI0NjE4MjRhZDE1NjhhY2Q5Y2RlMDM5ZDM4NGNhNjQ4NmNkMWVjOfej62w=: 00:19:26.229 18:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:26.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.229 18:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.229 18:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.229 18:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.229 18:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.229 18:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.229 18:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:26.229 18:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:26.490 18:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:26.490 18:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.490 18:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:26.490 18:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:26.490 18:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:26.490 18:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.490 18:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.490 18:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.490 18:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.490 18:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.490 18:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.490 18:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.087 00:19:27.087 18:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.087 18:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.087 18:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.345 18:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.345 18:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:19:27.345 18:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.345 18:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.345 18:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.345 18:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.345 { 00:19:27.345 "cntlid": 35, 00:19:27.345 "qid": 0, 00:19:27.345 "state": "enabled", 00:19:27.345 "listen_address": { 00:19:27.345 "trtype": "TCP", 00:19:27.345 "adrfam": "IPv4", 00:19:27.345 "traddr": "10.0.0.2", 00:19:27.345 "trsvcid": "4420" 00:19:27.345 }, 00:19:27.345 "peer_address": { 00:19:27.345 "trtype": "TCP", 00:19:27.345 "adrfam": "IPv4", 00:19:27.345 "traddr": "10.0.0.1", 00:19:27.345 "trsvcid": "34914" 00:19:27.345 }, 00:19:27.345 "auth": { 00:19:27.345 "state": "completed", 00:19:27.345 "digest": "sha256", 00:19:27.345 "dhgroup": "ffdhe6144" 00:19:27.345 } 00:19:27.345 } 00:19:27.345 ]' 00:19:27.345 18:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.345 18:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.345 18:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.345 18:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:27.345 18:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.602 18:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.602 18:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.602 18:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.860 18:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTBmNjA0NTljMjZjNjYyOGY2NmRkOGYyNmUwOGMwNTIu2TL+: --dhchap-ctrl-secret DHHC-1:02:Mjg0NTBlYzdlMTg4NGI2ODJlZjk5M2I2YWE4MDQzMDRlMmI0MWNmMDRkYjdlN2RjeXTjkw==: 00:19:28.793 18:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.793 18:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.793 18:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.793 18:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.793 18:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.793 18:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.793 18:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:28.793 18:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
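Each key index in this log goes through the same per-iteration sequence that key1 just completed. A minimal sketch of one iteration, built only from the commands echoed above; the hostrpc helper body, the key names key1/ckey1, and the two secret placeholders stand in for what the script loaded earlier in the run, and rpc_cmd is the companion helper that talks to the target-side RPC socket:

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

    # pin the SPDK-host bdev layer to one digest/dhgroup combination
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
    # register the host on the target subsystem with the key pair under test
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # attach from the SPDK host, which runs DH-HMAC-CHAP on the new qpair
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # confirm what the target negotiated, then detach
    rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'
    hostrpc bdev_nvme_detach_controller nvme0
    # repeat the handshake with the kernel initiator, passing the literal DHHC-1 secrets
    # ($key1_secret / $ckey1_secret are placeholders for the strings printed in the log)
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret "$key1_secret" --dhchap-ctrl-secret "$ckey1_secret"
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"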
00:19:28.793 18:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:28.793 18:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.793 18:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:28.793 18:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:28.793 18:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:28.793 18:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.793 18:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.793 18:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.793 18:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.793 18:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.793 18:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.793 18:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.357 00:19:29.357 18:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.357 18:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.357 18:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.614 18:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.614 18:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.614 18:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.614 18:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.614 18:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.614 18:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.614 { 00:19:29.614 "cntlid": 37, 00:19:29.614 "qid": 0, 00:19:29.614 "state": "enabled", 00:19:29.614 "listen_address": { 00:19:29.614 "trtype": "TCP", 00:19:29.614 "adrfam": "IPv4", 00:19:29.614 "traddr": "10.0.0.2", 00:19:29.614 "trsvcid": "4420" 00:19:29.614 }, 00:19:29.614 "peer_address": { 00:19:29.614 "trtype": "TCP", 00:19:29.614 "adrfam": "IPv4", 00:19:29.614 "traddr": "10.0.0.1", 00:19:29.614 "trsvcid": "34940" 00:19:29.614 }, 00:19:29.614 "auth": { 00:19:29.614 "state": "completed", 00:19:29.614 "digest": "sha256", 00:19:29.614 "dhgroup": "ffdhe6144" 00:19:29.614 } 00:19:29.614 } 00:19:29.614 ]' 00:19:29.614 18:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:19:29.614 18:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.614 18:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.871 18:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:29.871 18:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.871 18:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.871 18:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.871 18:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.127 18:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OWNmNGFlMjJiNTZhMDIzM2FlZjEyNTgyOGFjMDU4MGUzOTk3ZmU2ZGJiYmVjZTJkeC3lbA==: --dhchap-ctrl-secret DHHC-1:01:NjliNzExZGFmMTE4NzM0MWJlYTBiYWNiMzExYWQwN2GVe8VP: 00:19:31.058 18:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.058 18:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:31.058 18:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.058 18:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.058 18:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.058 18:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.058 18:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:31.058 18:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:31.315 18:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:31.315 18:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.315 18:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:31.315 18:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:31.315 18:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:31.315 18:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.315 18:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:31.315 18:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.315 18:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.315 18:49:41 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.315 18:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.315 18:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.880 00:19:31.880 18:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.880 18:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.880 18:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.138 18:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.138 18:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.138 18:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.138 18:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.138 18:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.138 18:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.138 { 00:19:32.138 "cntlid": 39, 00:19:32.138 "qid": 0, 00:19:32.138 "state": "enabled", 00:19:32.138 "listen_address": { 00:19:32.138 "trtype": "TCP", 00:19:32.138 "adrfam": "IPv4", 00:19:32.138 "traddr": "10.0.0.2", 00:19:32.138 "trsvcid": "4420" 00:19:32.138 }, 00:19:32.138 "peer_address": { 00:19:32.138 "trtype": "TCP", 00:19:32.138 "adrfam": "IPv4", 00:19:32.138 "traddr": "10.0.0.1", 00:19:32.138 "trsvcid": "34956" 00:19:32.138 }, 00:19:32.138 "auth": { 00:19:32.138 "state": "completed", 00:19:32.138 "digest": "sha256", 00:19:32.138 "dhgroup": "ffdhe6144" 00:19:32.138 } 00:19:32.138 } 00:19:32.138 ]' 00:19:32.138 18:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.138 18:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.138 18:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.138 18:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:32.138 18:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.138 18:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.138 18:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.138 18:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.399 18:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:03:MDU5ODNiNWQyMTZjZjc2ZmIyZDMxNTc1ZGEzYWRlNjdkOTE4YmE3YWRmZmFmNjQ1MjMxNTdjM2U3ZDY3NzA4MZTqCOQ=: 00:19:33.345 18:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.345 18:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.345 18:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.345 18:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.345 18:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.345 18:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:33.345 18:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.345 18:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:33.345 18:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:33.602 18:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:33.602 18:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.602 18:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:33.602 18:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:33.602 18:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:33.602 18:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.602 18:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.602 18:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.602 18:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.602 18:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.602 18:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.602 18:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.532 00:19:34.532 18:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.532 18:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.532 18:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.790 18:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.790 18:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.790 18:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.790 18:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.790 18:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.790 18:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.790 { 00:19:34.790 "cntlid": 41, 00:19:34.790 "qid": 0, 00:19:34.790 "state": "enabled", 00:19:34.790 "listen_address": { 00:19:34.790 "trtype": "TCP", 00:19:34.790 "adrfam": "IPv4", 00:19:34.790 "traddr": "10.0.0.2", 00:19:34.790 "trsvcid": "4420" 00:19:34.790 }, 00:19:34.790 "peer_address": { 00:19:34.790 "trtype": "TCP", 00:19:34.790 "adrfam": "IPv4", 00:19:34.790 "traddr": "10.0.0.1", 00:19:34.790 "trsvcid": "60184" 00:19:34.790 }, 00:19:34.790 "auth": { 00:19:34.790 "state": "completed", 00:19:34.790 "digest": "sha256", 00:19:34.790 "dhgroup": "ffdhe8192" 00:19:34.790 } 00:19:34.790 } 00:19:34.790 ]' 00:19:34.790 18:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.790 18:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.790 18:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.047 18:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:35.047 18:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.047 18:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.047 18:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.047 18:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.304 18:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDllMzVkMjk1YzcwM2NkY2MxNTM4ODMyN2E1MWVjMDczMTgwYzY1ZjI5NmQzOWJhkXpF0A==: --dhchap-ctrl-secret DHHC-1:03:OTA1MDM5ZWQxOGZhN2M3OGNjYzk2NDU0OTI0NjE4MjRhZDE1NjhhY2Q5Y2RlMDM5ZDM4NGNhNjQ4NmNkMWVjOfej62w=: 00:19:36.237 18:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.237 18:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.237 18:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.237 18:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.237 18:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.237 18:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:19:36.237 18:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:36.237 18:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:36.495 18:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:36.495 18:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.495 18:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:36.495 18:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:36.495 18:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:36.495 18:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.495 18:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.495 18:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.495 18:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.495 18:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.495 18:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.495 18:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.428 00:19:37.428 18:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.428 18:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.428 18:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.687 18:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.687 18:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.687 18:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.687 18:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.687 18:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.687 18:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.687 { 00:19:37.687 "cntlid": 43, 00:19:37.687 "qid": 0, 00:19:37.687 "state": "enabled", 00:19:37.687 "listen_address": { 00:19:37.687 "trtype": "TCP", 00:19:37.687 "adrfam": "IPv4", 00:19:37.687 "traddr": "10.0.0.2", 00:19:37.687 "trsvcid": "4420" 00:19:37.687 }, 00:19:37.687 "peer_address": { 
00:19:37.687 "trtype": "TCP", 00:19:37.687 "adrfam": "IPv4", 00:19:37.687 "traddr": "10.0.0.1", 00:19:37.687 "trsvcid": "60214" 00:19:37.687 }, 00:19:37.687 "auth": { 00:19:37.687 "state": "completed", 00:19:37.687 "digest": "sha256", 00:19:37.687 "dhgroup": "ffdhe8192" 00:19:37.687 } 00:19:37.687 } 00:19:37.687 ]' 00:19:37.687 18:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.687 18:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.687 18:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.687 18:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:37.687 18:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.687 18:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.687 18:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.687 18:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.945 18:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTBmNjA0NTljMjZjNjYyOGY2NmRkOGYyNmUwOGMwNTIu2TL+: --dhchap-ctrl-secret DHHC-1:02:Mjg0NTBlYzdlMTg4NGI2ODJlZjk5M2I2YWE4MDQzMDRlMmI0MWNmMDRkYjdlN2RjeXTjkw==: 00:19:38.877 18:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.877 18:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.877 18:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.877 18:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.877 18:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.877 18:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.877 18:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:38.877 18:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:39.477 18:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:39.477 18:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.477 18:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:39.477 18:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:39.477 18:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:39.477 18:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.477 18:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.477 18:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.477 18:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.477 18:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.477 18:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.477 18:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.409 00:19:40.409 18:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.409 18:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.409 18:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.409 18:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.409 18:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.409 18:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.409 18:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.409 18:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.409 18:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.409 { 00:19:40.409 "cntlid": 45, 00:19:40.409 "qid": 0, 00:19:40.409 "state": "enabled", 00:19:40.409 "listen_address": { 00:19:40.409 "trtype": "TCP", 00:19:40.409 "adrfam": "IPv4", 00:19:40.409 "traddr": "10.0.0.2", 00:19:40.409 "trsvcid": "4420" 00:19:40.410 }, 00:19:40.410 "peer_address": { 00:19:40.410 "trtype": "TCP", 00:19:40.410 "adrfam": "IPv4", 00:19:40.410 "traddr": "10.0.0.1", 00:19:40.410 "trsvcid": "60246" 00:19:40.410 }, 00:19:40.410 "auth": { 00:19:40.410 "state": "completed", 00:19:40.410 "digest": "sha256", 00:19:40.410 "dhgroup": "ffdhe8192" 00:19:40.410 } 00:19:40.410 } 00:19:40.410 ]' 00:19:40.410 18:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.410 18:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.410 18:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.668 18:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:40.668 18:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.668 18:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.668 18:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.668 18:49:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.926 18:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OWNmNGFlMjJiNTZhMDIzM2FlZjEyNTgyOGFjMDU4MGUzOTk3ZmU2ZGJiYmVjZTJkeC3lbA==: --dhchap-ctrl-secret DHHC-1:01:NjliNzExZGFmMTE4NzM0MWJlYTBiYWNiMzExYWQwN2GVe8VP: 00:19:41.862 18:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.862 18:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.862 18:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.862 18:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.862 18:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.862 18:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.862 18:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:41.862 18:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:42.120 18:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:42.120 18:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.120 18:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:42.120 18:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:42.120 18:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:42.120 18:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.120 18:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:42.121 18:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.121 18:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.121 18:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.121 18:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.121 18:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
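For key index 3 the controller key is absent in this run, so the ckey expansion seen above drops the --dhchap-ctrlr-key argument and the handshake is exercised with host authentication only. A small illustration of that shell expansion, with hypothetical array contents:

    # ${var:+word} expands to word only when var is set and non-empty
    ckeys=( ckey0 ckey1 ckey2 "" )       # hypothetical: index 3 has no controller key, as in this run
    keyid=3
    ckey=( ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"} )
    echo "${#ckey[@]}"                   # 0 -> no --dhchap-ctrlr-key is passed for key3
    keyid=1
    ckey=( ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"} )
    echo "${ckey[@]}"                    # --dhchap-ctrlr-key ckey1 -> bidirectional authentication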
00:19:43.054 00:19:43.054 18:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.054 18:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.054 18:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.054 18:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.054 18:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.054 18:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.054 18:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.054 18:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.054 18:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.054 { 00:19:43.054 "cntlid": 47, 00:19:43.054 "qid": 0, 00:19:43.054 "state": "enabled", 00:19:43.054 "listen_address": { 00:19:43.054 "trtype": "TCP", 00:19:43.054 "adrfam": "IPv4", 00:19:43.054 "traddr": "10.0.0.2", 00:19:43.054 "trsvcid": "4420" 00:19:43.054 }, 00:19:43.054 "peer_address": { 00:19:43.054 "trtype": "TCP", 00:19:43.054 "adrfam": "IPv4", 00:19:43.054 "traddr": "10.0.0.1", 00:19:43.054 "trsvcid": "60260" 00:19:43.054 }, 00:19:43.054 "auth": { 00:19:43.054 "state": "completed", 00:19:43.054 "digest": "sha256", 00:19:43.055 "dhgroup": "ffdhe8192" 00:19:43.055 } 00:19:43.055 } 00:19:43.055 ]' 00:19:43.055 18:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.312 18:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.312 18:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.312 18:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:43.312 18:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.312 18:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.312 18:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.312 18:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.569 18:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDU5ODNiNWQyMTZjZjc2ZmIyZDMxNTc1ZGEzYWRlNjdkOTE4YmE3YWRmZmFmNjQ1MjMxNTdjM2U3ZDY3NzA4MZTqCOQ=: 00:19:44.502 18:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.502 18:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.502 18:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.502 18:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.502 
18:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.502 18:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:44.502 18:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.502 18:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.502 18:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:44.502 18:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:44.760 18:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:44.760 18:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.760 18:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:44.760 18:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:44.760 18:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:44.760 18:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.760 18:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.760 18:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.760 18:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.760 18:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.760 18:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.760 18:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.018 00:19:45.018 18:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.018 18:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.018 18:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.275 18:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.275 18:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.275 18:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.275 18:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.547 18:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.547 18:49:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.547 { 00:19:45.547 "cntlid": 49, 00:19:45.547 "qid": 0, 00:19:45.547 "state": "enabled", 00:19:45.547 "listen_address": { 00:19:45.547 "trtype": "TCP", 00:19:45.547 "adrfam": "IPv4", 00:19:45.547 "traddr": "10.0.0.2", 00:19:45.547 "trsvcid": "4420" 00:19:45.547 }, 00:19:45.547 "peer_address": { 00:19:45.547 "trtype": "TCP", 00:19:45.547 "adrfam": "IPv4", 00:19:45.547 "traddr": "10.0.0.1", 00:19:45.547 "trsvcid": "39450" 00:19:45.547 }, 00:19:45.547 "auth": { 00:19:45.547 "state": "completed", 00:19:45.547 "digest": "sha384", 00:19:45.547 "dhgroup": "null" 00:19:45.547 } 00:19:45.547 } 00:19:45.547 ]' 00:19:45.547 18:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.547 18:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.547 18:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.547 18:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:45.547 18:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.547 18:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.547 18:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.547 18:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.805 18:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDllMzVkMjk1YzcwM2NkY2MxNTM4ODMyN2E1MWVjMDczMTgwYzY1ZjI5NmQzOWJhkXpF0A==: --dhchap-ctrl-secret DHHC-1:03:OTA1MDM5ZWQxOGZhN2M3OGNjYzk2NDU0OTI0NjE4MjRhZDE1NjhhY2Q5Y2RlMDM5ZDM4NGNhNjQ4NmNkMWVjOfej62w=: 00:19:46.737 18:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.737 18:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:46.737 18:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.737 18:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.737 18:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.737 18:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.737 18:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:46.737 18:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:46.995 18:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:46.995 18:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.995 18:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:19:46.995 18:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:46.995 18:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:46.995 18:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.995 18:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.995 18:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.995 18:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.995 18:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.995 18:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.995 18:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.253 00:19:47.253 18:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.253 18:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.253 18:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.511 18:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.511 18:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.511 18:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.511 18:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.511 18:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.511 18:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.511 { 00:19:47.511 "cntlid": 51, 00:19:47.511 "qid": 0, 00:19:47.511 "state": "enabled", 00:19:47.511 "listen_address": { 00:19:47.511 "trtype": "TCP", 00:19:47.511 "adrfam": "IPv4", 00:19:47.511 "traddr": "10.0.0.2", 00:19:47.511 "trsvcid": "4420" 00:19:47.511 }, 00:19:47.511 "peer_address": { 00:19:47.511 "trtype": "TCP", 00:19:47.511 "adrfam": "IPv4", 00:19:47.511 "traddr": "10.0.0.1", 00:19:47.511 "trsvcid": "39472" 00:19:47.511 }, 00:19:47.511 "auth": { 00:19:47.511 "state": "completed", 00:19:47.511 "digest": "sha384", 00:19:47.511 "dhgroup": "null" 00:19:47.511 } 00:19:47.511 } 00:19:47.511 ]' 00:19:47.511 18:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.511 18:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:47.511 18:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.511 18:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 
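Every attach in this log is followed by the same three assertions against the first qpair of the subsystem; generalized over the digest and dhgroup under test, the check amounts to the following sketch:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]   # e.g. sha384 here
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]   # e.g. null here
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]    # handshake finished successfully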
00:19:47.511 18:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.769 18:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.769 18:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.769 18:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.027 18:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTBmNjA0NTljMjZjNjYyOGY2NmRkOGYyNmUwOGMwNTIu2TL+: --dhchap-ctrl-secret DHHC-1:02:Mjg0NTBlYzdlMTg4NGI2ODJlZjk5M2I2YWE4MDQzMDRlMmI0MWNmMDRkYjdlN2RjeXTjkw==: 00:19:48.962 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.962 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.962 18:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.962 18:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.962 18:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.962 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.962 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:48.962 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:49.221 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:49.221 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.221 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:49.221 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:49.221 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:49.221 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.221 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.221 18:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.221 18:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.221 18:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.221 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:19:49.221 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.480 00:19:49.480 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.480 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.480 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.738 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.738 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.738 18:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.738 18:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.738 18:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.738 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.738 { 00:19:49.738 "cntlid": 53, 00:19:49.738 "qid": 0, 00:19:49.738 "state": "enabled", 00:19:49.738 "listen_address": { 00:19:49.738 "trtype": "TCP", 00:19:49.738 "adrfam": "IPv4", 00:19:49.738 "traddr": "10.0.0.2", 00:19:49.738 "trsvcid": "4420" 00:19:49.738 }, 00:19:49.738 "peer_address": { 00:19:49.738 "trtype": "TCP", 00:19:49.738 "adrfam": "IPv4", 00:19:49.738 "traddr": "10.0.0.1", 00:19:49.738 "trsvcid": "39518" 00:19:49.738 }, 00:19:49.738 "auth": { 00:19:49.738 "state": "completed", 00:19:49.738 "digest": "sha384", 00:19:49.738 "dhgroup": "null" 00:19:49.738 } 00:19:49.738 } 00:19:49.738 ]' 00:19:49.738 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.738 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:49.738 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.738 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:49.738 18:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.738 18:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.738 18:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.738 18:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.996 18:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OWNmNGFlMjJiNTZhMDIzM2FlZjEyNTgyOGFjMDU4MGUzOTk3ZmU2ZGJiYmVjZTJkeC3lbA==: --dhchap-ctrl-secret DHHC-1:01:NjliNzExZGFmMTE4NzM0MWJlYTBiYWNiMzExYWQwN2GVe8VP: 00:19:50.928 18:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.928 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:19:50.928 18:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.928 18:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.928 18:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.928 18:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.928 18:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.928 18:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:50.928 18:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:51.186 18:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:51.186 18:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.186 18:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:51.186 18:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:51.186 18:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:51.186 18:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.186 18:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:51.186 18:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.186 18:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.186 18:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.186 18:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.186 18:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.752 00:19:51.752 18:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.752 18:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.752 18:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.752 18:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.752 18:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.752 18:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.752 18:50:02 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:51.752 18:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.752 18:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.752 { 00:19:51.752 "cntlid": 55, 00:19:51.752 "qid": 0, 00:19:51.752 "state": "enabled", 00:19:51.752 "listen_address": { 00:19:51.752 "trtype": "TCP", 00:19:51.752 "adrfam": "IPv4", 00:19:51.752 "traddr": "10.0.0.2", 00:19:51.752 "trsvcid": "4420" 00:19:51.752 }, 00:19:51.752 "peer_address": { 00:19:51.752 "trtype": "TCP", 00:19:51.752 "adrfam": "IPv4", 00:19:51.752 "traddr": "10.0.0.1", 00:19:51.752 "trsvcid": "39550" 00:19:51.752 }, 00:19:51.752 "auth": { 00:19:51.752 "state": "completed", 00:19:51.752 "digest": "sha384", 00:19:51.752 "dhgroup": "null" 00:19:51.752 } 00:19:51.752 } 00:19:51.752 ]' 00:19:51.752 18:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.017 18:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.017 18:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.017 18:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:52.017 18:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.017 18:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.017 18:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.017 18:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.303 18:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDU5ODNiNWQyMTZjZjc2ZmIyZDMxNTc1ZGEzYWRlNjdkOTE4YmE3YWRmZmFmNjQ1MjMxNTdjM2U3ZDY3NzA4MZTqCOQ=: 00:19:53.235 18:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.235 18:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.235 18:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.235 18:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.235 18:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.235 18:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.235 18:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.235 18:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:53.235 18:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:53.492 18:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:53.492 
18:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.492 18:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:53.492 18:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:53.492 18:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:53.492 18:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.492 18:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.492 18:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.492 18:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.492 18:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.492 18:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.492 18:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.750 00:19:53.750 18:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.750 18:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.750 18:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.007 18:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.007 18:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.007 18:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.007 18:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.007 18:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.007 18:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.007 { 00:19:54.007 "cntlid": 57, 00:19:54.007 "qid": 0, 00:19:54.007 "state": "enabled", 00:19:54.007 "listen_address": { 00:19:54.007 "trtype": "TCP", 00:19:54.007 "adrfam": "IPv4", 00:19:54.007 "traddr": "10.0.0.2", 00:19:54.007 "trsvcid": "4420" 00:19:54.007 }, 00:19:54.007 "peer_address": { 00:19:54.007 "trtype": "TCP", 00:19:54.007 "adrfam": "IPv4", 00:19:54.007 "traddr": "10.0.0.1", 00:19:54.007 "trsvcid": "39572" 00:19:54.007 }, 00:19:54.007 "auth": { 00:19:54.007 "state": "completed", 00:19:54.007 "digest": "sha384", 00:19:54.007 "dhgroup": "ffdhe2048" 00:19:54.007 } 00:19:54.007 } 00:19:54.007 ]' 00:19:54.007 18:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.007 18:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:54.007 18:50:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.007 18:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:54.007 18:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.264 18:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.264 18:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.264 18:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.521 18:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDllMzVkMjk1YzcwM2NkY2MxNTM4ODMyN2E1MWVjMDczMTgwYzY1ZjI5NmQzOWJhkXpF0A==: --dhchap-ctrl-secret DHHC-1:03:OTA1MDM5ZWQxOGZhN2M3OGNjYzk2NDU0OTI0NjE4MjRhZDE1NjhhY2Q5Y2RlMDM5ZDM4NGNhNjQ4NmNkMWVjOfej62w=: 00:19:55.462 18:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.463 18:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.463 18:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.463 18:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.463 18:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.463 18:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.463 18:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:55.463 18:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:55.721 18:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:55.721 18:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.721 18:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:55.721 18:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:55.721 18:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:55.721 18:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.721 18:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.721 18:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.721 18:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.721 18:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.721 18:50:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.721 18:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.979 00:19:55.979 18:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.979 18:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.979 18:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.237 18:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.237 18:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.237 18:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.237 18:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.237 18:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.237 18:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.237 { 00:19:56.237 "cntlid": 59, 00:19:56.237 "qid": 0, 00:19:56.237 "state": "enabled", 00:19:56.237 "listen_address": { 00:19:56.237 "trtype": "TCP", 00:19:56.237 "adrfam": "IPv4", 00:19:56.237 "traddr": "10.0.0.2", 00:19:56.237 "trsvcid": "4420" 00:19:56.237 }, 00:19:56.237 "peer_address": { 00:19:56.237 "trtype": "TCP", 00:19:56.237 "adrfam": "IPv4", 00:19:56.237 "traddr": "10.0.0.1", 00:19:56.237 "trsvcid": "38764" 00:19:56.237 }, 00:19:56.237 "auth": { 00:19:56.237 "state": "completed", 00:19:56.237 "digest": "sha384", 00:19:56.237 "dhgroup": "ffdhe2048" 00:19:56.237 } 00:19:56.237 } 00:19:56.237 ]' 00:19:56.237 18:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.237 18:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.237 18:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.495 18:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:56.495 18:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.495 18:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.495 18:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.495 18:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.754 18:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:MTBmNjA0NTljMjZjNjYyOGY2NmRkOGYyNmUwOGMwNTIu2TL+: --dhchap-ctrl-secret DHHC-1:02:Mjg0NTBlYzdlMTg4NGI2ODJlZjk5M2I2YWE4MDQzMDRlMmI0MWNmMDRkYjdlN2RjeXTjkw==: 00:19:57.687 18:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.688 18:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.688 18:50:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.688 18:50:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.688 18:50:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.688 18:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.688 18:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:57.688 18:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:57.945 18:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:57.945 18:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.945 18:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:57.945 18:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:57.945 18:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:57.945 18:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.945 18:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.945 18:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.945 18:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.945 18:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.945 18:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.945 18:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.204 00:19:58.204 18:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.204 18:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.204 18:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:58.464 18:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.464 18:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.464 18:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.464 18:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.464 18:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.464 18:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.464 { 00:19:58.464 "cntlid": 61, 00:19:58.464 "qid": 0, 00:19:58.464 "state": "enabled", 00:19:58.464 "listen_address": { 00:19:58.464 "trtype": "TCP", 00:19:58.464 "adrfam": "IPv4", 00:19:58.464 "traddr": "10.0.0.2", 00:19:58.464 "trsvcid": "4420" 00:19:58.464 }, 00:19:58.464 "peer_address": { 00:19:58.464 "trtype": "TCP", 00:19:58.464 "adrfam": "IPv4", 00:19:58.464 "traddr": "10.0.0.1", 00:19:58.464 "trsvcid": "38784" 00:19:58.464 }, 00:19:58.464 "auth": { 00:19:58.464 "state": "completed", 00:19:58.464 "digest": "sha384", 00:19:58.464 "dhgroup": "ffdhe2048" 00:19:58.464 } 00:19:58.464 } 00:19:58.464 ]' 00:19:58.464 18:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.464 18:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.464 18:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.722 18:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:58.722 18:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.722 18:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.722 18:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.722 18:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.981 18:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OWNmNGFlMjJiNTZhMDIzM2FlZjEyNTgyOGFjMDU4MGUzOTk3ZmU2ZGJiYmVjZTJkeC3lbA==: --dhchap-ctrl-secret DHHC-1:01:NjliNzExZGFmMTE4NzM0MWJlYTBiYWNiMzExYWQwN2GVe8VP: 00:19:59.915 18:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.915 18:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.915 18:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.915 18:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.915 18:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.915 18:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.915 18:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:19:59.915 18:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:00.174 18:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:00.174 18:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.174 18:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:00.174 18:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:00.174 18:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:00.174 18:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.174 18:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:00.174 18:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.174 18:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.174 18:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.174 18:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.174 18:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.431 00:20:00.431 18:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.431 18:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.431 18:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.687 18:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.687 18:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.687 18:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.687 18:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.687 18:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.687 18:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.687 { 00:20:00.687 "cntlid": 63, 00:20:00.687 "qid": 0, 00:20:00.687 "state": "enabled", 00:20:00.687 "listen_address": { 00:20:00.687 "trtype": "TCP", 00:20:00.687 "adrfam": "IPv4", 00:20:00.687 "traddr": "10.0.0.2", 00:20:00.687 "trsvcid": "4420" 00:20:00.687 }, 00:20:00.687 "peer_address": { 00:20:00.687 "trtype": "TCP", 00:20:00.687 "adrfam": "IPv4", 00:20:00.687 "traddr": "10.0.0.1", 00:20:00.687 "trsvcid": "38796" 00:20:00.687 }, 00:20:00.687 "auth": { 00:20:00.687 "state": "completed", 00:20:00.687 "digest": 
"sha384", 00:20:00.687 "dhgroup": "ffdhe2048" 00:20:00.687 } 00:20:00.687 } 00:20:00.687 ]' 00:20:00.687 18:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.687 18:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.687 18:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.944 18:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:00.944 18:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.944 18:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.944 18:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.944 18:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.202 18:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDU5ODNiNWQyMTZjZjc2ZmIyZDMxNTc1ZGEzYWRlNjdkOTE4YmE3YWRmZmFmNjQ1MjMxNTdjM2U3ZDY3NzA4MZTqCOQ=: 00:20:02.135 18:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.135 18:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.135 18:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.135 18:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.135 18:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.135 18:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.135 18:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.135 18:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:02.135 18:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:02.393 18:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:02.393 18:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.393 18:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:02.393 18:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:02.393 18:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:02.393 18:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.393 18:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:20:02.393 18:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.393 18:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.393 18:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.393 18:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.393 18:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.650 00:20:02.650 18:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.650 18:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.650 18:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.907 18:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.907 18:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.907 18:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.907 18:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.907 18:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.907 18:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.907 { 00:20:02.907 "cntlid": 65, 00:20:02.907 "qid": 0, 00:20:02.907 "state": "enabled", 00:20:02.907 "listen_address": { 00:20:02.907 "trtype": "TCP", 00:20:02.907 "adrfam": "IPv4", 00:20:02.907 "traddr": "10.0.0.2", 00:20:02.907 "trsvcid": "4420" 00:20:02.907 }, 00:20:02.907 "peer_address": { 00:20:02.907 "trtype": "TCP", 00:20:02.907 "adrfam": "IPv4", 00:20:02.907 "traddr": "10.0.0.1", 00:20:02.907 "trsvcid": "38832" 00:20:02.907 }, 00:20:02.907 "auth": { 00:20:02.907 "state": "completed", 00:20:02.907 "digest": "sha384", 00:20:02.907 "dhgroup": "ffdhe3072" 00:20:02.907 } 00:20:02.907 } 00:20:02.907 ]' 00:20:02.907 18:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.163 18:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.163 18:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.163 18:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:03.163 18:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.163 18:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.163 18:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.163 18:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.419 
18:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDllMzVkMjk1YzcwM2NkY2MxNTM4ODMyN2E1MWVjMDczMTgwYzY1ZjI5NmQzOWJhkXpF0A==: --dhchap-ctrl-secret DHHC-1:03:OTA1MDM5ZWQxOGZhN2M3OGNjYzk2NDU0OTI0NjE4MjRhZDE1NjhhY2Q5Y2RlMDM5ZDM4NGNhNjQ4NmNkMWVjOfej62w=: 00:20:04.351 18:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.351 18:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.351 18:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.351 18:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.351 18:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.351 18:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.351 18:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:04.351 18:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:04.615 18:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:04.615 18:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.615 18:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:04.615 18:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:04.615 18:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:04.615 18:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.615 18:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.615 18:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.615 18:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.615 18:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.615 18:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.615 18:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.872 00:20:04.872 18:50:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.872 18:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.872 18:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.129 18:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.129 18:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.129 18:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.129 18:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.129 18:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.129 18:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.129 { 00:20:05.129 "cntlid": 67, 00:20:05.129 "qid": 0, 00:20:05.129 "state": "enabled", 00:20:05.129 "listen_address": { 00:20:05.129 "trtype": "TCP", 00:20:05.129 "adrfam": "IPv4", 00:20:05.129 "traddr": "10.0.0.2", 00:20:05.129 "trsvcid": "4420" 00:20:05.129 }, 00:20:05.129 "peer_address": { 00:20:05.129 "trtype": "TCP", 00:20:05.129 "adrfam": "IPv4", 00:20:05.129 "traddr": "10.0.0.1", 00:20:05.129 "trsvcid": "48214" 00:20:05.129 }, 00:20:05.129 "auth": { 00:20:05.129 "state": "completed", 00:20:05.129 "digest": "sha384", 00:20:05.129 "dhgroup": "ffdhe3072" 00:20:05.129 } 00:20:05.129 } 00:20:05.129 ]' 00:20:05.129 18:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.129 18:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.129 18:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.386 18:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:05.386 18:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.386 18:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.386 18:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.386 18:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.644 18:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTBmNjA0NTljMjZjNjYyOGY2NmRkOGYyNmUwOGMwNTIu2TL+: --dhchap-ctrl-secret DHHC-1:02:Mjg0NTBlYzdlMTg4NGI2ODJlZjk5M2I2YWE4MDQzMDRlMmI0MWNmMDRkYjdlN2RjeXTjkw==: 00:20:06.575 18:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.575 18:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.575 18:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.575 18:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.575 
18:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.575 18:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.575 18:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:06.575 18:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:06.833 18:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:06.833 18:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.833 18:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:06.833 18:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:06.833 18:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:06.833 18:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.833 18:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.833 18:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.833 18:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.833 18:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.833 18:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.833 18:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.091 00:20:07.091 18:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.091 18:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.091 18:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.348 18:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.348 18:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.348 18:50:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.348 18:50:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.348 18:50:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.348 18:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.348 { 00:20:07.348 "cntlid": 69, 00:20:07.348 "qid": 0, 00:20:07.348 "state": "enabled", 00:20:07.348 "listen_address": { 
00:20:07.348 "trtype": "TCP", 00:20:07.348 "adrfam": "IPv4", 00:20:07.348 "traddr": "10.0.0.2", 00:20:07.348 "trsvcid": "4420" 00:20:07.348 }, 00:20:07.348 "peer_address": { 00:20:07.348 "trtype": "TCP", 00:20:07.348 "adrfam": "IPv4", 00:20:07.348 "traddr": "10.0.0.1", 00:20:07.348 "trsvcid": "48248" 00:20:07.348 }, 00:20:07.348 "auth": { 00:20:07.348 "state": "completed", 00:20:07.348 "digest": "sha384", 00:20:07.348 "dhgroup": "ffdhe3072" 00:20:07.348 } 00:20:07.348 } 00:20:07.348 ]' 00:20:07.349 18:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.349 18:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.349 18:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.349 18:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:07.349 18:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.606 18:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.606 18:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.606 18:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.863 18:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OWNmNGFlMjJiNTZhMDIzM2FlZjEyNTgyOGFjMDU4MGUzOTk3ZmU2ZGJiYmVjZTJkeC3lbA==: --dhchap-ctrl-secret DHHC-1:01:NjliNzExZGFmMTE4NzM0MWJlYTBiYWNiMzExYWQwN2GVe8VP: 00:20:08.797 18:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.797 18:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.797 18:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.797 18:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.797 18:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.797 18:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.797 18:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:08.797 18:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:09.055 18:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:09.055 18:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.055 18:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:09.055 18:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:09.055 18:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:09.055 
18:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.055 18:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:09.055 18:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.055 18:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.055 18:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.055 18:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.055 18:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.313 00:20:09.313 18:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.313 18:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.313 18:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.571 18:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.571 18:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.571 18:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.571 18:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.571 18:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.571 18:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.571 { 00:20:09.571 "cntlid": 71, 00:20:09.571 "qid": 0, 00:20:09.571 "state": "enabled", 00:20:09.571 "listen_address": { 00:20:09.571 "trtype": "TCP", 00:20:09.571 "adrfam": "IPv4", 00:20:09.571 "traddr": "10.0.0.2", 00:20:09.571 "trsvcid": "4420" 00:20:09.571 }, 00:20:09.571 "peer_address": { 00:20:09.571 "trtype": "TCP", 00:20:09.571 "adrfam": "IPv4", 00:20:09.571 "traddr": "10.0.0.1", 00:20:09.571 "trsvcid": "48270" 00:20:09.571 }, 00:20:09.571 "auth": { 00:20:09.571 "state": "completed", 00:20:09.571 "digest": "sha384", 00:20:09.571 "dhgroup": "ffdhe3072" 00:20:09.571 } 00:20:09.571 } 00:20:09.571 ]' 00:20:09.571 18:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.571 18:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.571 18:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.830 18:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:09.830 18:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.830 18:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.830 18:50:19 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.830 18:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.088 18:50:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDU5ODNiNWQyMTZjZjc2ZmIyZDMxNTc1ZGEzYWRlNjdkOTE4YmE3YWRmZmFmNjQ1MjMxNTdjM2U3ZDY3NzA4MZTqCOQ=: 00:20:11.022 18:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.022 18:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.022 18:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.022 18:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.022 18:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.022 18:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.022 18:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.022 18:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:11.022 18:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:11.279 18:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:11.279 18:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.279 18:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:11.279 18:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:11.279 18:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:11.279 18:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.279 18:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.279 18:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.279 18:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.279 18:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.279 18:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.279 18:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.841 00:20:11.841 18:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.841 18:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.841 18:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.841 18:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.841 18:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.841 18:50:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.841 18:50:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.841 18:50:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.841 18:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.841 { 00:20:11.841 "cntlid": 73, 00:20:11.841 "qid": 0, 00:20:11.841 "state": "enabled", 00:20:11.841 "listen_address": { 00:20:11.841 "trtype": "TCP", 00:20:11.841 "adrfam": "IPv4", 00:20:11.841 "traddr": "10.0.0.2", 00:20:11.841 "trsvcid": "4420" 00:20:11.841 }, 00:20:11.841 "peer_address": { 00:20:11.841 "trtype": "TCP", 00:20:11.841 "adrfam": "IPv4", 00:20:11.841 "traddr": "10.0.0.1", 00:20:11.841 "trsvcid": "48290" 00:20:11.841 }, 00:20:11.841 "auth": { 00:20:11.841 "state": "completed", 00:20:11.841 "digest": "sha384", 00:20:11.841 "dhgroup": "ffdhe4096" 00:20:11.841 } 00:20:11.841 } 00:20:11.841 ]' 00:20:11.841 18:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.098 18:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.098 18:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.098 18:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:12.098 18:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.098 18:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.098 18:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.098 18:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.354 18:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDllMzVkMjk1YzcwM2NkY2MxNTM4ODMyN2E1MWVjMDczMTgwYzY1ZjI5NmQzOWJhkXpF0A==: --dhchap-ctrl-secret DHHC-1:03:OTA1MDM5ZWQxOGZhN2M3OGNjYzk2NDU0OTI0NjE4MjRhZDE1NjhhY2Q5Y2RlMDM5ZDM4NGNhNjQ4NmNkMWVjOfej62w=: 00:20:13.284 18:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.284 18:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:13.284 18:50:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.284 18:50:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.284 18:50:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.284 18:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.284 18:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:13.284 18:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:13.540 18:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:13.540 18:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.540 18:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:13.540 18:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:13.540 18:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:13.540 18:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.540 18:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.540 18:50:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.540 18:50:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.540 18:50:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.540 18:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.540 18:50:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.797 00:20:13.797 18:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.797 18:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.797 18:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.054 18:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.054 18:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.054 18:50:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.054 18:50:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:14.054 18:50:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.054 18:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.054 { 00:20:14.054 "cntlid": 75, 00:20:14.054 "qid": 0, 00:20:14.054 "state": "enabled", 00:20:14.054 "listen_address": { 00:20:14.054 "trtype": "TCP", 00:20:14.054 "adrfam": "IPv4", 00:20:14.054 "traddr": "10.0.0.2", 00:20:14.054 "trsvcid": "4420" 00:20:14.054 }, 00:20:14.054 "peer_address": { 00:20:14.054 "trtype": "TCP", 00:20:14.054 "adrfam": "IPv4", 00:20:14.054 "traddr": "10.0.0.1", 00:20:14.054 "trsvcid": "48328" 00:20:14.054 }, 00:20:14.054 "auth": { 00:20:14.054 "state": "completed", 00:20:14.054 "digest": "sha384", 00:20:14.054 "dhgroup": "ffdhe4096" 00:20:14.054 } 00:20:14.054 } 00:20:14.054 ]' 00:20:14.054 18:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.054 18:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.311 18:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.311 18:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:14.311 18:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.311 18:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.311 18:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.311 18:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.567 18:50:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTBmNjA0NTljMjZjNjYyOGY2NmRkOGYyNmUwOGMwNTIu2TL+: --dhchap-ctrl-secret DHHC-1:02:Mjg0NTBlYzdlMTg4NGI2ODJlZjk5M2I2YWE4MDQzMDRlMmI0MWNmMDRkYjdlN2RjeXTjkw==: 00:20:15.496 18:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.496 18:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.496 18:50:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.496 18:50:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.496 18:50:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.496 18:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.496 18:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:15.496 18:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:15.753 18:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:15.753 18:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:20:15.753 18:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:15.753 18:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:15.753 18:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:15.753 18:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.753 18:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.753 18:50:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.753 18:50:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.753 18:50:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.753 18:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.753 18:50:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.318 00:20:16.318 18:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.318 18:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.318 18:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.576 18:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.576 18:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.576 18:50:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.576 18:50:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.576 18:50:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.576 18:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.576 { 00:20:16.576 "cntlid": 77, 00:20:16.576 "qid": 0, 00:20:16.576 "state": "enabled", 00:20:16.576 "listen_address": { 00:20:16.576 "trtype": "TCP", 00:20:16.576 "adrfam": "IPv4", 00:20:16.576 "traddr": "10.0.0.2", 00:20:16.576 "trsvcid": "4420" 00:20:16.576 }, 00:20:16.576 "peer_address": { 00:20:16.576 "trtype": "TCP", 00:20:16.576 "adrfam": "IPv4", 00:20:16.576 "traddr": "10.0.0.1", 00:20:16.576 "trsvcid": "44648" 00:20:16.576 }, 00:20:16.576 "auth": { 00:20:16.576 "state": "completed", 00:20:16.576 "digest": "sha384", 00:20:16.576 "dhgroup": "ffdhe4096" 00:20:16.576 } 00:20:16.576 } 00:20:16.576 ]' 00:20:16.576 18:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.576 18:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.576 18:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:20:16.576 18:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:16.576 18:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.576 18:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.576 18:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.576 18:50:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.868 18:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OWNmNGFlMjJiNTZhMDIzM2FlZjEyNTgyOGFjMDU4MGUzOTk3ZmU2ZGJiYmVjZTJkeC3lbA==: --dhchap-ctrl-secret DHHC-1:01:NjliNzExZGFmMTE4NzM0MWJlYTBiYWNiMzExYWQwN2GVe8VP: 00:20:17.801 18:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.801 18:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.801 18:50:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.801 18:50:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.801 18:50:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.801 18:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.801 18:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:17.801 18:50:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:18.060 18:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:18.060 18:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.060 18:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:18.060 18:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:18.060 18:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:18.060 18:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.060 18:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:18.060 18:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.060 18:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.060 18:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.060 18:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:18.060 18:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:18.623 00:20:18.623 18:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.623 18:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.624 18:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.880 18:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.880 18:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.880 18:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.880 18:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.880 18:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.880 18:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.880 { 00:20:18.880 "cntlid": 79, 00:20:18.880 "qid": 0, 00:20:18.880 "state": "enabled", 00:20:18.880 "listen_address": { 00:20:18.880 "trtype": "TCP", 00:20:18.880 "adrfam": "IPv4", 00:20:18.880 "traddr": "10.0.0.2", 00:20:18.880 "trsvcid": "4420" 00:20:18.880 }, 00:20:18.880 "peer_address": { 00:20:18.880 "trtype": "TCP", 00:20:18.880 "adrfam": "IPv4", 00:20:18.880 "traddr": "10.0.0.1", 00:20:18.880 "trsvcid": "44674" 00:20:18.880 }, 00:20:18.880 "auth": { 00:20:18.880 "state": "completed", 00:20:18.880 "digest": "sha384", 00:20:18.880 "dhgroup": "ffdhe4096" 00:20:18.880 } 00:20:18.880 } 00:20:18.880 ]' 00:20:18.880 18:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.880 18:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.880 18:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.880 18:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:18.880 18:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.880 18:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.880 18:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.880 18:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.136 18:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDU5ODNiNWQyMTZjZjc2ZmIyZDMxNTc1ZGEzYWRlNjdkOTE4YmE3YWRmZmFmNjQ1MjMxNTdjM2U3ZDY3NzA4MZTqCOQ=: 00:20:20.066 18:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.066 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.066 18:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.066 18:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.066 18:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.066 18:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.066 18:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.066 18:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.066 18:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:20.066 18:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:20.322 18:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:20.322 18:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.322 18:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:20.322 18:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:20.322 18:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:20.322 18:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.322 18:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.322 18:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.322 18:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.323 18:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.323 18:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.323 18:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.887 00:20:20.887 18:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.887 18:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.887 18:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.144 18:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.144 18:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.144 18:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.144 18:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.144 18:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.144 18:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.144 { 00:20:21.144 "cntlid": 81, 00:20:21.144 "qid": 0, 00:20:21.144 "state": "enabled", 00:20:21.144 "listen_address": { 00:20:21.144 "trtype": "TCP", 00:20:21.144 "adrfam": "IPv4", 00:20:21.144 "traddr": "10.0.0.2", 00:20:21.144 "trsvcid": "4420" 00:20:21.144 }, 00:20:21.144 "peer_address": { 00:20:21.144 "trtype": "TCP", 00:20:21.144 "adrfam": "IPv4", 00:20:21.144 "traddr": "10.0.0.1", 00:20:21.144 "trsvcid": "44700" 00:20:21.144 }, 00:20:21.144 "auth": { 00:20:21.144 "state": "completed", 00:20:21.144 "digest": "sha384", 00:20:21.144 "dhgroup": "ffdhe6144" 00:20:21.144 } 00:20:21.144 } 00:20:21.144 ]' 00:20:21.144 18:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.144 18:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.144 18:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.419 18:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:21.419 18:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.419 18:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.419 18:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.419 18:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.676 18:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDllMzVkMjk1YzcwM2NkY2MxNTM4ODMyN2E1MWVjMDczMTgwYzY1ZjI5NmQzOWJhkXpF0A==: --dhchap-ctrl-secret DHHC-1:03:OTA1MDM5ZWQxOGZhN2M3OGNjYzk2NDU0OTI0NjE4MjRhZDE1NjhhY2Q5Y2RlMDM5ZDM4NGNhNjQ4NmNkMWVjOfej62w=: 00:20:22.606 18:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.606 18:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.606 18:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.606 18:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.606 18:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.606 18:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.606 18:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:22.606 18:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:22.863 18:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:22.863 18:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.863 18:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:22.863 18:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:22.863 18:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:22.863 18:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.863 18:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.863 18:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.863 18:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.863 18:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.863 18:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.863 18:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.427 00:20:23.427 18:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.428 18:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.428 18:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.684 18:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.684 18:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.684 18:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.684 18:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.684 18:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.684 18:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.684 { 00:20:23.684 "cntlid": 83, 00:20:23.684 "qid": 0, 00:20:23.684 "state": "enabled", 00:20:23.684 "listen_address": { 00:20:23.684 "trtype": "TCP", 00:20:23.684 "adrfam": "IPv4", 00:20:23.684 "traddr": "10.0.0.2", 00:20:23.684 "trsvcid": "4420" 00:20:23.684 }, 00:20:23.684 "peer_address": { 00:20:23.684 "trtype": "TCP", 00:20:23.684 "adrfam": "IPv4", 00:20:23.684 "traddr": "10.0.0.1", 00:20:23.684 "trsvcid": "44724" 00:20:23.684 }, 00:20:23.684 "auth": { 00:20:23.684 "state": "completed", 00:20:23.684 "digest": "sha384", 00:20:23.684 
"dhgroup": "ffdhe6144" 00:20:23.684 } 00:20:23.684 } 00:20:23.684 ]' 00:20:23.684 18:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.684 18:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.684 18:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.684 18:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:23.684 18:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.684 18:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.684 18:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.684 18:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.940 18:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTBmNjA0NTljMjZjNjYyOGY2NmRkOGYyNmUwOGMwNTIu2TL+: --dhchap-ctrl-secret DHHC-1:02:Mjg0NTBlYzdlMTg4NGI2ODJlZjk5M2I2YWE4MDQzMDRlMmI0MWNmMDRkYjdlN2RjeXTjkw==: 00:20:24.870 18:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.870 18:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.870 18:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.870 18:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.870 18:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.870 18:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.870 18:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:24.870 18:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:25.127 18:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:25.127 18:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.127 18:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:25.127 18:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:25.127 18:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:25.127 18:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.127 18:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.127 18:50:35 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.127 18:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.127 18:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.127 18:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.127 18:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.691 00:20:25.691 18:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.691 18:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.691 18:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.949 18:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.949 18:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.949 18:50:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.949 18:50:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.949 18:50:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.949 18:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.949 { 00:20:25.949 "cntlid": 85, 00:20:25.949 "qid": 0, 00:20:25.949 "state": "enabled", 00:20:25.949 "listen_address": { 00:20:25.949 "trtype": "TCP", 00:20:25.949 "adrfam": "IPv4", 00:20:25.949 "traddr": "10.0.0.2", 00:20:25.949 "trsvcid": "4420" 00:20:25.949 }, 00:20:25.949 "peer_address": { 00:20:25.949 "trtype": "TCP", 00:20:25.949 "adrfam": "IPv4", 00:20:25.949 "traddr": "10.0.0.1", 00:20:25.949 "trsvcid": "59638" 00:20:25.949 }, 00:20:25.949 "auth": { 00:20:25.949 "state": "completed", 00:20:25.949 "digest": "sha384", 00:20:25.949 "dhgroup": "ffdhe6144" 00:20:25.949 } 00:20:25.949 } 00:20:25.949 ]' 00:20:25.949 18:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.949 18:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.949 18:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.207 18:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:26.207 18:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.207 18:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.207 18:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.207 18:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.465 18:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OWNmNGFlMjJiNTZhMDIzM2FlZjEyNTgyOGFjMDU4MGUzOTk3ZmU2ZGJiYmVjZTJkeC3lbA==: --dhchap-ctrl-secret DHHC-1:01:NjliNzExZGFmMTE4NzM0MWJlYTBiYWNiMzExYWQwN2GVe8VP: 00:20:27.397 18:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.397 18:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.397 18:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.397 18:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.397 18:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.397 18:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.397 18:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:27.397 18:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:27.654 18:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:27.654 18:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.654 18:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:27.654 18:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:27.654 18:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:27.654 18:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.654 18:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:27.654 18:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.654 18:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.654 18:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.654 18:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:27.654 18:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:28.220 00:20:28.220 18:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.220 18:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.220 18:50:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.478 18:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.478 18:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.478 18:50:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.478 18:50:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.478 18:50:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.478 18:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.478 { 00:20:28.478 "cntlid": 87, 00:20:28.478 "qid": 0, 00:20:28.478 "state": "enabled", 00:20:28.478 "listen_address": { 00:20:28.478 "trtype": "TCP", 00:20:28.478 "adrfam": "IPv4", 00:20:28.478 "traddr": "10.0.0.2", 00:20:28.478 "trsvcid": "4420" 00:20:28.478 }, 00:20:28.478 "peer_address": { 00:20:28.478 "trtype": "TCP", 00:20:28.478 "adrfam": "IPv4", 00:20:28.478 "traddr": "10.0.0.1", 00:20:28.478 "trsvcid": "59658" 00:20:28.478 }, 00:20:28.478 "auth": { 00:20:28.478 "state": "completed", 00:20:28.478 "digest": "sha384", 00:20:28.478 "dhgroup": "ffdhe6144" 00:20:28.478 } 00:20:28.478 } 00:20:28.478 ]' 00:20:28.478 18:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.478 18:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.478 18:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.478 18:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:28.478 18:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.478 18:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.478 18:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.478 18:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.736 18:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDU5ODNiNWQyMTZjZjc2ZmIyZDMxNTc1ZGEzYWRlNjdkOTE4YmE3YWRmZmFmNjQ1MjMxNTdjM2U3ZDY3NzA4MZTqCOQ=: 00:20:29.669 18:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.669 18:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.669 18:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.669 18:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.669 18:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.669 18:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:29.669 18:50:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.669 18:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:29.669 18:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:29.928 18:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:29.928 18:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.928 18:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:29.928 18:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:29.928 18:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:29.928 18:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.928 18:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.928 18:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.928 18:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.928 18:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.928 18:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.928 18:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.861 00:20:30.861 18:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.861 18:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.861 18:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.118 18:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.118 18:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.118 18:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.118 18:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.118 18:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.118 18:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.118 { 00:20:31.118 "cntlid": 89, 00:20:31.118 "qid": 0, 00:20:31.118 "state": "enabled", 00:20:31.118 "listen_address": { 00:20:31.118 "trtype": "TCP", 00:20:31.118 "adrfam": "IPv4", 00:20:31.118 "traddr": "10.0.0.2", 00:20:31.118 
"trsvcid": "4420" 00:20:31.118 }, 00:20:31.118 "peer_address": { 00:20:31.118 "trtype": "TCP", 00:20:31.118 "adrfam": "IPv4", 00:20:31.118 "traddr": "10.0.0.1", 00:20:31.118 "trsvcid": "59692" 00:20:31.118 }, 00:20:31.118 "auth": { 00:20:31.118 "state": "completed", 00:20:31.118 "digest": "sha384", 00:20:31.118 "dhgroup": "ffdhe8192" 00:20:31.118 } 00:20:31.118 } 00:20:31.118 ]' 00:20:31.118 18:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.118 18:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.118 18:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.118 18:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:31.118 18:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.118 18:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.118 18:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.118 18:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.376 18:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDllMzVkMjk1YzcwM2NkY2MxNTM4ODMyN2E1MWVjMDczMTgwYzY1ZjI5NmQzOWJhkXpF0A==: --dhchap-ctrl-secret DHHC-1:03:OTA1MDM5ZWQxOGZhN2M3OGNjYzk2NDU0OTI0NjE4MjRhZDE1NjhhY2Q5Y2RlMDM5ZDM4NGNhNjQ4NmNkMWVjOfej62w=: 00:20:32.308 18:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.308 18:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.308 18:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.308 18:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.308 18:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.308 18:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.308 18:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:32.308 18:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:32.871 18:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:32.871 18:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.871 18:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:32.871 18:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:32.871 18:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:32.871 18:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.871 18:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.871 18:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.871 18:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.871 18:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.871 18:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.871 18:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.801 00:20:33.801 18:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:33.801 18:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:33.801 18:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.801 18:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.801 18:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.801 18:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.801 18:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.801 18:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.801 18:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.801 { 00:20:33.801 "cntlid": 91, 00:20:33.801 "qid": 0, 00:20:33.801 "state": "enabled", 00:20:33.801 "listen_address": { 00:20:33.801 "trtype": "TCP", 00:20:33.801 "adrfam": "IPv4", 00:20:33.801 "traddr": "10.0.0.2", 00:20:33.801 "trsvcid": "4420" 00:20:33.801 }, 00:20:33.801 "peer_address": { 00:20:33.801 "trtype": "TCP", 00:20:33.801 "adrfam": "IPv4", 00:20:33.801 "traddr": "10.0.0.1", 00:20:33.801 "trsvcid": "59720" 00:20:33.801 }, 00:20:33.801 "auth": { 00:20:33.801 "state": "completed", 00:20:33.801 "digest": "sha384", 00:20:33.801 "dhgroup": "ffdhe8192" 00:20:33.801 } 00:20:33.801 } 00:20:33.801 ]' 00:20:33.801 18:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.058 18:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.058 18:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.058 18:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:34.058 18:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.058 18:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.058 18:50:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.058 18:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.322 18:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTBmNjA0NTljMjZjNjYyOGY2NmRkOGYyNmUwOGMwNTIu2TL+: --dhchap-ctrl-secret DHHC-1:02:Mjg0NTBlYzdlMTg4NGI2ODJlZjk5M2I2YWE4MDQzMDRlMmI0MWNmMDRkYjdlN2RjeXTjkw==: 00:20:35.253 18:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.253 18:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.253 18:50:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.253 18:50:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.253 18:50:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.253 18:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.253 18:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:35.253 18:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:35.510 18:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:35.510 18:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.510 18:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:35.510 18:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:35.510 18:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:35.510 18:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.510 18:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.510 18:50:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.510 18:50:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.510 18:50:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.510 18:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.510 18:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.440 00:20:36.440 18:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.441 18:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.441 18:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.698 18:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.698 18:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.698 18:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.698 18:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.698 18:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.698 18:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.698 { 00:20:36.698 "cntlid": 93, 00:20:36.698 "qid": 0, 00:20:36.698 "state": "enabled", 00:20:36.698 "listen_address": { 00:20:36.698 "trtype": "TCP", 00:20:36.698 "adrfam": "IPv4", 00:20:36.698 "traddr": "10.0.0.2", 00:20:36.698 "trsvcid": "4420" 00:20:36.698 }, 00:20:36.698 "peer_address": { 00:20:36.698 "trtype": "TCP", 00:20:36.698 "adrfam": "IPv4", 00:20:36.698 "traddr": "10.0.0.1", 00:20:36.698 "trsvcid": "50752" 00:20:36.698 }, 00:20:36.698 "auth": { 00:20:36.698 "state": "completed", 00:20:36.698 "digest": "sha384", 00:20:36.698 "dhgroup": "ffdhe8192" 00:20:36.698 } 00:20:36.698 } 00:20:36.698 ]' 00:20:36.698 18:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.698 18:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.698 18:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.698 18:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:36.698 18:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.698 18:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.698 18:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.698 18:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.955 18:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OWNmNGFlMjJiNTZhMDIzM2FlZjEyNTgyOGFjMDU4MGUzOTk3ZmU2ZGJiYmVjZTJkeC3lbA==: --dhchap-ctrl-secret DHHC-1:01:NjliNzExZGFmMTE4NzM0MWJlYTBiYWNiMzExYWQwN2GVe8VP: 00:20:37.888 18:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.888 18:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.888 18:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.888 18:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.888 18:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.888 18:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.888 18:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:37.888 18:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:38.145 18:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:38.145 18:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.145 18:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:38.145 18:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:38.145 18:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:38.145 18:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.145 18:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:38.145 18:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.145 18:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.145 18:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.145 18:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:38.145 18:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:39.077 00:20:39.077 18:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.077 18:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.077 18:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.335 18:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.335 18:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.335 18:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.335 18:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.335 18:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.335 18:50:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.336 { 00:20:39.336 "cntlid": 95, 00:20:39.336 "qid": 0, 00:20:39.336 "state": "enabled", 00:20:39.336 "listen_address": { 00:20:39.336 "trtype": "TCP", 00:20:39.336 "adrfam": "IPv4", 00:20:39.336 "traddr": "10.0.0.2", 00:20:39.336 "trsvcid": "4420" 00:20:39.336 }, 00:20:39.336 "peer_address": { 00:20:39.336 "trtype": "TCP", 00:20:39.336 "adrfam": "IPv4", 00:20:39.336 "traddr": "10.0.0.1", 00:20:39.336 "trsvcid": "50782" 00:20:39.336 }, 00:20:39.336 "auth": { 00:20:39.336 "state": "completed", 00:20:39.336 "digest": "sha384", 00:20:39.336 "dhgroup": "ffdhe8192" 00:20:39.336 } 00:20:39.336 } 00:20:39.336 ]' 00:20:39.336 18:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.336 18:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.336 18:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.336 18:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:39.336 18:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.336 18:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.336 18:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.336 18:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.593 18:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDU5ODNiNWQyMTZjZjc2ZmIyZDMxNTc1ZGEzYWRlNjdkOTE4YmE3YWRmZmFmNjQ1MjMxNTdjM2U3ZDY3NzA4MZTqCOQ=: 00:20:40.524 18:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.524 18:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.524 18:50:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.524 18:50:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.524 18:50:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.524 18:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:40.524 18:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.524 18:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.524 18:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:40.524 18:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:40.781 18:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:40.781 18:50:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.781 18:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:40.781 18:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:40.781 18:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:40.782 18:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.782 18:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.782 18:50:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.782 18:50:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.782 18:50:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.782 18:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.782 18:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.039 00:20:41.039 18:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.039 18:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.039 18:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.296 18:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.296 18:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.296 18:50:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.296 18:50:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.296 18:50:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.296 18:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.296 { 00:20:41.296 "cntlid": 97, 00:20:41.296 "qid": 0, 00:20:41.296 "state": "enabled", 00:20:41.296 "listen_address": { 00:20:41.296 "trtype": "TCP", 00:20:41.296 "adrfam": "IPv4", 00:20:41.296 "traddr": "10.0.0.2", 00:20:41.296 "trsvcid": "4420" 00:20:41.296 }, 00:20:41.296 "peer_address": { 00:20:41.296 "trtype": "TCP", 00:20:41.296 "adrfam": "IPv4", 00:20:41.296 "traddr": "10.0.0.1", 00:20:41.296 "trsvcid": "50814" 00:20:41.296 }, 00:20:41.296 "auth": { 00:20:41.296 "state": "completed", 00:20:41.296 "digest": "sha512", 00:20:41.296 "dhgroup": "null" 00:20:41.296 } 00:20:41.296 } 00:20:41.296 ]' 00:20:41.296 18:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.553 18:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.553 18:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:20:41.553 18:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:41.553 18:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.553 18:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.553 18:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.553 18:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.811 18:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDllMzVkMjk1YzcwM2NkY2MxNTM4ODMyN2E1MWVjMDczMTgwYzY1ZjI5NmQzOWJhkXpF0A==: --dhchap-ctrl-secret DHHC-1:03:OTA1MDM5ZWQxOGZhN2M3OGNjYzk2NDU0OTI0NjE4MjRhZDE1NjhhY2Q5Y2RlMDM5ZDM4NGNhNjQ4NmNkMWVjOfej62w=: 00:20:42.757 18:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.757 18:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.757 18:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.757 18:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.757 18:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.757 18:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.757 18:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:42.757 18:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:43.015 18:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:43.015 18:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.015 18:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:43.015 18:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:43.015 18:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:43.015 18:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.015 18:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.015 18:50:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.015 18:50:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.015 18:50:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.015 18:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.015 18:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.272 00:20:43.272 18:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.272 18:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.272 18:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.529 18:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.530 18:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.530 18:50:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.530 18:50:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.530 18:50:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.530 18:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.530 { 00:20:43.530 "cntlid": 99, 00:20:43.530 "qid": 0, 00:20:43.530 "state": "enabled", 00:20:43.530 "listen_address": { 00:20:43.530 "trtype": "TCP", 00:20:43.530 "adrfam": "IPv4", 00:20:43.530 "traddr": "10.0.0.2", 00:20:43.530 "trsvcid": "4420" 00:20:43.530 }, 00:20:43.530 "peer_address": { 00:20:43.530 "trtype": "TCP", 00:20:43.530 "adrfam": "IPv4", 00:20:43.530 "traddr": "10.0.0.1", 00:20:43.530 "trsvcid": "50838" 00:20:43.530 }, 00:20:43.530 "auth": { 00:20:43.530 "state": "completed", 00:20:43.530 "digest": "sha512", 00:20:43.530 "dhgroup": "null" 00:20:43.530 } 00:20:43.530 } 00:20:43.530 ]' 00:20:43.530 18:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.530 18:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:43.530 18:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.530 18:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:43.530 18:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.530 18:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.530 18:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.530 18:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.787 18:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTBmNjA0NTljMjZjNjYyOGY2NmRkOGYyNmUwOGMwNTIu2TL+: --dhchap-ctrl-secret DHHC-1:02:Mjg0NTBlYzdlMTg4NGI2ODJlZjk5M2I2YWE4MDQzMDRlMmI0MWNmMDRkYjdlN2RjeXTjkw==: 
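For reference, the host-side connect/disconnect cycle traced above reduces to the sketch below; the NQNs, host UUID and DHHC-1 secrets are placeholders standing in for the values printed in the log, not literal test inputs.

# Hypothetical stand-ins (<host-uuid>, <host-key>, <ctrl-key>) for the subsystem NQN,
# host NQN/ID and DH-HMAC-CHAP secrets that appear in the trace above.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:<host-uuid> \
    --hostid <host-uuid> \
    --dhchap-secret "DHHC-1:01:<host-key>:" \
    --dhchap-ctrl-secret "DHHC-1:02:<ctrl-key>:"
# Tear the session down again once the authenticated connection has been verified.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0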
00:20:44.719 18:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.719 18:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.719 18:50:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.719 18:50:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.719 18:50:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.719 18:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.719 18:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:44.719 18:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:44.977 18:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:20:44.977 18:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.977 18:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:44.977 18:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:44.977 18:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:44.977 18:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.977 18:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.977 18:50:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.977 18:50:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.977 18:50:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.977 18:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.977 18:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.235 00:20:45.235 18:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.235 18:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.235 18:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.493 18:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.493 18:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.493 18:50:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.493 18:50:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.493 18:50:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.493 18:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.493 { 00:20:45.493 "cntlid": 101, 00:20:45.493 "qid": 0, 00:20:45.493 "state": "enabled", 00:20:45.493 "listen_address": { 00:20:45.493 "trtype": "TCP", 00:20:45.493 "adrfam": "IPv4", 00:20:45.493 "traddr": "10.0.0.2", 00:20:45.493 "trsvcid": "4420" 00:20:45.493 }, 00:20:45.493 "peer_address": { 00:20:45.493 "trtype": "TCP", 00:20:45.493 "adrfam": "IPv4", 00:20:45.493 "traddr": "10.0.0.1", 00:20:45.493 "trsvcid": "56118" 00:20:45.493 }, 00:20:45.493 "auth": { 00:20:45.493 "state": "completed", 00:20:45.493 "digest": "sha512", 00:20:45.493 "dhgroup": "null" 00:20:45.493 } 00:20:45.493 } 00:20:45.493 ]' 00:20:45.493 18:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.750 18:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:45.750 18:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.750 18:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:45.750 18:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.750 18:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.750 18:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.750 18:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.008 18:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OWNmNGFlMjJiNTZhMDIzM2FlZjEyNTgyOGFjMDU4MGUzOTk3ZmU2ZGJiYmVjZTJkeC3lbA==: --dhchap-ctrl-secret DHHC-1:01:NjliNzExZGFmMTE4NzM0MWJlYTBiYWNiMzExYWQwN2GVe8VP: 00:20:46.940 18:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.940 18:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.940 18:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.941 18:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.941 18:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.941 18:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.941 18:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:46.941 18:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:20:47.198 18:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:47.198 18:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.198 18:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:47.198 18:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:47.198 18:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:47.198 18:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.198 18:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:47.198 18:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.198 18:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.198 18:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.198 18:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:47.198 18:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:47.464 00:20:47.464 18:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.464 18:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.464 18:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.722 18:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.722 18:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.722 18:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.722 18:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.722 18:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.722 18:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.722 { 00:20:47.722 "cntlid": 103, 00:20:47.722 "qid": 0, 00:20:47.722 "state": "enabled", 00:20:47.722 "listen_address": { 00:20:47.722 "trtype": "TCP", 00:20:47.722 "adrfam": "IPv4", 00:20:47.722 "traddr": "10.0.0.2", 00:20:47.722 "trsvcid": "4420" 00:20:47.722 }, 00:20:47.722 "peer_address": { 00:20:47.722 "trtype": "TCP", 00:20:47.722 "adrfam": "IPv4", 00:20:47.723 "traddr": "10.0.0.1", 00:20:47.723 "trsvcid": "56130" 00:20:47.723 }, 00:20:47.723 "auth": { 00:20:47.723 "state": "completed", 00:20:47.723 "digest": "sha512", 00:20:47.723 "dhgroup": "null" 00:20:47.723 } 00:20:47.723 } 00:20:47.723 ]' 00:20:47.723 18:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.723 18:50:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:47.723 18:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.979 18:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:47.979 18:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.979 18:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.979 18:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.979 18:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.236 18:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDU5ODNiNWQyMTZjZjc2ZmIyZDMxNTc1ZGEzYWRlNjdkOTE4YmE3YWRmZmFmNjQ1MjMxNTdjM2U3ZDY3NzA4MZTqCOQ=: 00:20:49.167 18:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.167 18:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.167 18:50:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.167 18:50:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.167 18:50:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.167 18:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.167 18:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.167 18:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:49.167 18:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:49.424 18:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:49.424 18:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.424 18:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:49.424 18:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:49.424 18:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:49.424 18:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.424 18:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.424 18:50:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.424 18:50:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.424 18:50:59 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.424 18:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.424 18:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.681 00:20:49.681 18:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.681 18:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.681 18:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.939 18:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.939 18:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.939 18:51:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.939 18:51:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.939 18:51:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.939 18:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.939 { 00:20:49.939 "cntlid": 105, 00:20:49.939 "qid": 0, 00:20:49.939 "state": "enabled", 00:20:49.939 "listen_address": { 00:20:49.939 "trtype": "TCP", 00:20:49.939 "adrfam": "IPv4", 00:20:49.939 "traddr": "10.0.0.2", 00:20:49.939 "trsvcid": "4420" 00:20:49.939 }, 00:20:49.939 "peer_address": { 00:20:49.939 "trtype": "TCP", 00:20:49.939 "adrfam": "IPv4", 00:20:49.939 "traddr": "10.0.0.1", 00:20:49.939 "trsvcid": "56160" 00:20:49.939 }, 00:20:49.939 "auth": { 00:20:49.939 "state": "completed", 00:20:49.939 "digest": "sha512", 00:20:49.939 "dhgroup": "ffdhe2048" 00:20:49.939 } 00:20:49.939 } 00:20:49.939 ]' 00:20:49.939 18:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.939 18:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.939 18:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.195 18:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:50.195 18:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.195 18:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.195 18:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.195 18:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.453 18:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDllMzVkMjk1YzcwM2NkY2MxNTM4ODMyN2E1MWVjMDczMTgwYzY1ZjI5NmQzOWJhkXpF0A==: --dhchap-ctrl-secret DHHC-1:03:OTA1MDM5ZWQxOGZhN2M3OGNjYzk2NDU0OTI0NjE4MjRhZDE1NjhhY2Q5Y2RlMDM5ZDM4NGNhNjQ4NmNkMWVjOfej62w=: 00:20:51.385 18:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.385 18:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.385 18:51:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.385 18:51:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.385 18:51:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.385 18:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.386 18:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:51.386 18:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:51.644 18:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:51.644 18:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.644 18:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:51.644 18:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:51.644 18:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:51.644 18:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.644 18:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.644 18:51:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.644 18:51:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.644 18:51:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.644 18:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.644 18:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.902 00:20:51.902 18:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.902 18:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.902 18:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.159 18:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.159 18:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.159 18:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.159 18:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.159 18:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.159 18:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.159 { 00:20:52.159 "cntlid": 107, 00:20:52.159 "qid": 0, 00:20:52.159 "state": "enabled", 00:20:52.159 "listen_address": { 00:20:52.159 "trtype": "TCP", 00:20:52.159 "adrfam": "IPv4", 00:20:52.159 "traddr": "10.0.0.2", 00:20:52.159 "trsvcid": "4420" 00:20:52.159 }, 00:20:52.159 "peer_address": { 00:20:52.159 "trtype": "TCP", 00:20:52.159 "adrfam": "IPv4", 00:20:52.159 "traddr": "10.0.0.1", 00:20:52.159 "trsvcid": "56188" 00:20:52.159 }, 00:20:52.159 "auth": { 00:20:52.159 "state": "completed", 00:20:52.159 "digest": "sha512", 00:20:52.159 "dhgroup": "ffdhe2048" 00:20:52.159 } 00:20:52.159 } 00:20:52.159 ]' 00:20:52.159 18:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.159 18:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:52.159 18:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.159 18:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:52.159 18:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.159 18:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.159 18:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.159 18:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.418 18:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTBmNjA0NTljMjZjNjYyOGY2NmRkOGYyNmUwOGMwNTIu2TL+: --dhchap-ctrl-secret DHHC-1:02:Mjg0NTBlYzdlMTg4NGI2ODJlZjk5M2I2YWE4MDQzMDRlMmI0MWNmMDRkYjdlN2RjeXTjkw==: 00:20:53.366 18:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.366 18:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.366 18:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.366 18:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.366 18:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.366 18:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.366 18:51:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:53.367 18:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:53.624 18:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:20:53.624 18:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.624 18:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:53.624 18:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:53.624 18:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:53.624 18:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.624 18:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.624 18:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.624 18:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.624 18:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.624 18:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.624 18:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.190 00:20:54.190 18:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.190 18:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.190 18:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.190 18:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.190 18:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.190 18:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.190 18:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.190 18:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.190 18:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.190 { 00:20:54.190 "cntlid": 109, 00:20:54.190 "qid": 0, 00:20:54.190 "state": "enabled", 00:20:54.190 "listen_address": { 00:20:54.190 "trtype": "TCP", 00:20:54.190 "adrfam": "IPv4", 00:20:54.190 "traddr": "10.0.0.2", 00:20:54.190 "trsvcid": "4420" 00:20:54.190 }, 00:20:54.190 "peer_address": { 00:20:54.190 "trtype": "TCP", 00:20:54.190 
"adrfam": "IPv4", 00:20:54.190 "traddr": "10.0.0.1", 00:20:54.190 "trsvcid": "44620" 00:20:54.190 }, 00:20:54.190 "auth": { 00:20:54.190 "state": "completed", 00:20:54.190 "digest": "sha512", 00:20:54.190 "dhgroup": "ffdhe2048" 00:20:54.190 } 00:20:54.190 } 00:20:54.190 ]' 00:20:54.190 18:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.448 18:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:54.448 18:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.448 18:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:54.448 18:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.448 18:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.448 18:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.448 18:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.706 18:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OWNmNGFlMjJiNTZhMDIzM2FlZjEyNTgyOGFjMDU4MGUzOTk3ZmU2ZGJiYmVjZTJkeC3lbA==: --dhchap-ctrl-secret DHHC-1:01:NjliNzExZGFmMTE4NzM0MWJlYTBiYWNiMzExYWQwN2GVe8VP: 00:20:55.639 18:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.639 18:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.639 18:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.639 18:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.639 18:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.639 18:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.639 18:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:55.639 18:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:55.897 18:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:55.897 18:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.897 18:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:55.897 18:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:55.897 18:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:55.897 18:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.897 18:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:55.897 18:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.897 18:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.897 18:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.897 18:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:55.897 18:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.154 00:20:56.154 18:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.154 18:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.154 18:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.411 18:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.411 18:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.411 18:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.411 18:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.411 18:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.411 18:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.411 { 00:20:56.411 "cntlid": 111, 00:20:56.411 "qid": 0, 00:20:56.411 "state": "enabled", 00:20:56.411 "listen_address": { 00:20:56.411 "trtype": "TCP", 00:20:56.411 "adrfam": "IPv4", 00:20:56.411 "traddr": "10.0.0.2", 00:20:56.411 "trsvcid": "4420" 00:20:56.411 }, 00:20:56.411 "peer_address": { 00:20:56.411 "trtype": "TCP", 00:20:56.411 "adrfam": "IPv4", 00:20:56.411 "traddr": "10.0.0.1", 00:20:56.411 "trsvcid": "44652" 00:20:56.411 }, 00:20:56.411 "auth": { 00:20:56.411 "state": "completed", 00:20:56.411 "digest": "sha512", 00:20:56.411 "dhgroup": "ffdhe2048" 00:20:56.411 } 00:20:56.411 } 00:20:56.411 ]' 00:20:56.411 18:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.668 18:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.668 18:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.668 18:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:56.668 18:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.668 18:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.668 18:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.668 18:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.925 18:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDU5ODNiNWQyMTZjZjc2ZmIyZDMxNTc1ZGEzYWRlNjdkOTE4YmE3YWRmZmFmNjQ1MjMxNTdjM2U3ZDY3NzA4MZTqCOQ=: 00:20:57.857 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.857 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.857 18:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.857 18:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.857 18:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.857 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.857 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.857 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:57.857 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:58.114 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:58.114 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.114 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:58.114 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:58.114 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:58.114 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.114 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.114 18:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.114 18:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.114 18:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.114 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.114 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
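Each pass of the loop traced above drives the same three RPCs, with only the digest, DH group and key index changing; a minimal sketch, assuming the target listens on the default RPC socket and the host application on /var/tmp/host.sock as in the log, and that key0/ckey0 name keys loaded earlier in the test:

# Host side: restrict the initiator to one digest/dhgroup combination for this pass.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
# Target side: allow the host NQN with the matching DH-HMAC-CHAP key pair.
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Host side: attach the controller, which performs the authentication exchange.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q <hostnqn> \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0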
00:20:58.372 00:20:58.372 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.372 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.372 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.629 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.629 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.629 18:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.629 18:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.629 18:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.629 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.629 { 00:20:58.629 "cntlid": 113, 00:20:58.629 "qid": 0, 00:20:58.629 "state": "enabled", 00:20:58.629 "listen_address": { 00:20:58.629 "trtype": "TCP", 00:20:58.629 "adrfam": "IPv4", 00:20:58.629 "traddr": "10.0.0.2", 00:20:58.629 "trsvcid": "4420" 00:20:58.629 }, 00:20:58.629 "peer_address": { 00:20:58.629 "trtype": "TCP", 00:20:58.629 "adrfam": "IPv4", 00:20:58.629 "traddr": "10.0.0.1", 00:20:58.629 "trsvcid": "44668" 00:20:58.629 }, 00:20:58.629 "auth": { 00:20:58.629 "state": "completed", 00:20:58.629 "digest": "sha512", 00:20:58.629 "dhgroup": "ffdhe3072" 00:20:58.629 } 00:20:58.629 } 00:20:58.629 ]' 00:20:58.629 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.629 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.629 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.945 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:58.945 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.945 18:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.945 18:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.945 18:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.203 18:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDllMzVkMjk1YzcwM2NkY2MxNTM4ODMyN2E1MWVjMDczMTgwYzY1ZjI5NmQzOWJhkXpF0A==: --dhchap-ctrl-secret DHHC-1:03:OTA1MDM5ZWQxOGZhN2M3OGNjYzk2NDU0OTI0NjE4MjRhZDE1NjhhY2Q5Y2RlMDM5ZDM4NGNhNjQ4NmNkMWVjOfej62w=: 00:21:00.136 18:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.136 18:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.136 18:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
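After each attach the script confirms that the qpair actually completed DH-HMAC-CHAP with the expected parameters before detaching; a sketch of that check, with the jq filters taken from the trace and the expected values being whatever the current pass configured:

# Target side: dump the subsystem's qpairs and pull out the auth block.
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
# Expected output for the pass shown above: sha512, ffdhe3072, completed.
# Host side: drop the controller again before the next iteration.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0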
00:21:00.136 18:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.136 18:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.136 18:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:00.136 18:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:00.136 18:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:00.394 18:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:00.394 18:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.394 18:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:00.394 18:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:00.394 18:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:00.395 18:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.395 18:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.395 18:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.395 18:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.395 18:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.395 18:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.395 18:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.655 00:21:00.655 18:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.655 18:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.655 18:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.914 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.914 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.914 18:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.914 18:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.914 18:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.914 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.914 { 00:21:00.914 
"cntlid": 115, 00:21:00.914 "qid": 0, 00:21:00.914 "state": "enabled", 00:21:00.914 "listen_address": { 00:21:00.914 "trtype": "TCP", 00:21:00.914 "adrfam": "IPv4", 00:21:00.914 "traddr": "10.0.0.2", 00:21:00.914 "trsvcid": "4420" 00:21:00.914 }, 00:21:00.914 "peer_address": { 00:21:00.914 "trtype": "TCP", 00:21:00.914 "adrfam": "IPv4", 00:21:00.914 "traddr": "10.0.0.1", 00:21:00.914 "trsvcid": "44698" 00:21:00.914 }, 00:21:00.914 "auth": { 00:21:00.914 "state": "completed", 00:21:00.914 "digest": "sha512", 00:21:00.914 "dhgroup": "ffdhe3072" 00:21:00.914 } 00:21:00.914 } 00:21:00.914 ]' 00:21:00.914 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.914 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.914 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.914 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:00.914 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.914 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.914 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.914 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.172 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTBmNjA0NTljMjZjNjYyOGY2NmRkOGYyNmUwOGMwNTIu2TL+: --dhchap-ctrl-secret DHHC-1:02:Mjg0NTBlYzdlMTg4NGI2ODJlZjk5M2I2YWE4MDQzMDRlMmI0MWNmMDRkYjdlN2RjeXTjkw==: 00:21:02.105 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.105 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.105 18:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.105 18:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.105 18:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.105 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.105 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:02.105 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:02.672 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:02.672 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.672 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:02.672 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:21:02.672 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:02.672 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.672 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.672 18:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.672 18:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.672 18:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.672 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.672 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.930 00:21:02.930 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.930 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.930 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.187 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.187 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.187 18:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.187 18:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.187 18:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.187 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.187 { 00:21:03.187 "cntlid": 117, 00:21:03.187 "qid": 0, 00:21:03.187 "state": "enabled", 00:21:03.187 "listen_address": { 00:21:03.187 "trtype": "TCP", 00:21:03.187 "adrfam": "IPv4", 00:21:03.187 "traddr": "10.0.0.2", 00:21:03.187 "trsvcid": "4420" 00:21:03.187 }, 00:21:03.187 "peer_address": { 00:21:03.187 "trtype": "TCP", 00:21:03.187 "adrfam": "IPv4", 00:21:03.187 "traddr": "10.0.0.1", 00:21:03.187 "trsvcid": "44712" 00:21:03.187 }, 00:21:03.187 "auth": { 00:21:03.188 "state": "completed", 00:21:03.188 "digest": "sha512", 00:21:03.188 "dhgroup": "ffdhe3072" 00:21:03.188 } 00:21:03.188 } 00:21:03.188 ]' 00:21:03.188 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.188 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.188 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.188 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:03.188 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:21:03.188 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.188 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.188 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.445 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OWNmNGFlMjJiNTZhMDIzM2FlZjEyNTgyOGFjMDU4MGUzOTk3ZmU2ZGJiYmVjZTJkeC3lbA==: --dhchap-ctrl-secret DHHC-1:01:NjliNzExZGFmMTE4NzM0MWJlYTBiYWNiMzExYWQwN2GVe8VP: 00:21:04.378 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.378 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.378 18:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.378 18:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.378 18:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.378 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.378 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:04.378 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:04.636 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:04.636 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.636 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:04.636 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:04.636 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:04.636 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.636 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:04.636 18:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.636 18:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.636 18:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.636 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:04.636 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:05.202 00:21:05.202 18:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:05.202 18:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:05.202 18:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.460 18:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.460 18:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.460 18:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.460 18:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.460 18:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.460 18:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.460 { 00:21:05.460 "cntlid": 119, 00:21:05.460 "qid": 0, 00:21:05.460 "state": "enabled", 00:21:05.460 "listen_address": { 00:21:05.460 "trtype": "TCP", 00:21:05.460 "adrfam": "IPv4", 00:21:05.460 "traddr": "10.0.0.2", 00:21:05.460 "trsvcid": "4420" 00:21:05.460 }, 00:21:05.460 "peer_address": { 00:21:05.460 "trtype": "TCP", 00:21:05.460 "adrfam": "IPv4", 00:21:05.460 "traddr": "10.0.0.1", 00:21:05.460 "trsvcid": "41072" 00:21:05.460 }, 00:21:05.460 "auth": { 00:21:05.460 "state": "completed", 00:21:05.460 "digest": "sha512", 00:21:05.461 "dhgroup": "ffdhe3072" 00:21:05.461 } 00:21:05.461 } 00:21:05.461 ]' 00:21:05.461 18:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.461 18:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.461 18:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.461 18:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:05.461 18:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.461 18:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.461 18:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.461 18:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.719 18:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDU5ODNiNWQyMTZjZjc2ZmIyZDMxNTc1ZGEzYWRlNjdkOTE4YmE3YWRmZmFmNjQ1MjMxNTdjM2U3ZDY3NzA4MZTqCOQ=: 00:21:06.653 18:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.653 18:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.653 18:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.653 18:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.653 18:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.653 18:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.653 18:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.653 18:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:06.653 18:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:06.910 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:06.910 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.910 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:06.910 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:06.910 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:06.910 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.910 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.910 18:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.910 18:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.910 18:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.910 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.910 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.167 00:21:07.430 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:07.430 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:07.430 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.710 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.710 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.710 18:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.710 18:51:17 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.710 18:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.710 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:07.710 { 00:21:07.710 "cntlid": 121, 00:21:07.710 "qid": 0, 00:21:07.710 "state": "enabled", 00:21:07.710 "listen_address": { 00:21:07.710 "trtype": "TCP", 00:21:07.710 "adrfam": "IPv4", 00:21:07.710 "traddr": "10.0.0.2", 00:21:07.710 "trsvcid": "4420" 00:21:07.710 }, 00:21:07.710 "peer_address": { 00:21:07.710 "trtype": "TCP", 00:21:07.710 "adrfam": "IPv4", 00:21:07.710 "traddr": "10.0.0.1", 00:21:07.710 "trsvcid": "41106" 00:21:07.710 }, 00:21:07.710 "auth": { 00:21:07.710 "state": "completed", 00:21:07.710 "digest": "sha512", 00:21:07.710 "dhgroup": "ffdhe4096" 00:21:07.710 } 00:21:07.710 } 00:21:07.710 ]' 00:21:07.710 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.710 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.710 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.711 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:07.711 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.711 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.711 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.711 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.967 18:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDllMzVkMjk1YzcwM2NkY2MxNTM4ODMyN2E1MWVjMDczMTgwYzY1ZjI5NmQzOWJhkXpF0A==: --dhchap-ctrl-secret DHHC-1:03:OTA1MDM5ZWQxOGZhN2M3OGNjYzk2NDU0OTI0NjE4MjRhZDE1NjhhY2Q5Y2RlMDM5ZDM4NGNhNjQ4NmNkMWVjOfej62w=: 00:21:08.897 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.897 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.897 18:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.897 18:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.897 18:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.897 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.897 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:08.897 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:09.154 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:21:09.154 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.154 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:09.154 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:09.154 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:09.154 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.154 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.155 18:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.155 18:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.155 18:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.155 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.155 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.720 00:21:09.720 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.720 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.720 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.720 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.720 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.720 18:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.720 18:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.720 18:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.720 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.720 { 00:21:09.720 "cntlid": 123, 00:21:09.720 "qid": 0, 00:21:09.720 "state": "enabled", 00:21:09.720 "listen_address": { 00:21:09.720 "trtype": "TCP", 00:21:09.720 "adrfam": "IPv4", 00:21:09.720 "traddr": "10.0.0.2", 00:21:09.720 "trsvcid": "4420" 00:21:09.720 }, 00:21:09.720 "peer_address": { 00:21:09.720 "trtype": "TCP", 00:21:09.720 "adrfam": "IPv4", 00:21:09.720 "traddr": "10.0.0.1", 00:21:09.720 "trsvcid": "41150" 00:21:09.720 }, 00:21:09.720 "auth": { 00:21:09.720 "state": "completed", 00:21:09.720 "digest": "sha512", 00:21:09.720 "dhgroup": "ffdhe4096" 00:21:09.720 } 00:21:09.720 } 00:21:09.720 ]' 00:21:09.721 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.978 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.978 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.978 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:09.978 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.978 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.978 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.978 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.234 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTBmNjA0NTljMjZjNjYyOGY2NmRkOGYyNmUwOGMwNTIu2TL+: --dhchap-ctrl-secret DHHC-1:02:Mjg0NTBlYzdlMTg4NGI2ODJlZjk5M2I2YWE4MDQzMDRlMmI0MWNmMDRkYjdlN2RjeXTjkw==: 00:21:11.168 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.168 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.168 18:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.168 18:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.168 18:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.168 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.168 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:11.168 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:11.425 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:11.425 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.425 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:11.425 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:11.425 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:11.425 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.425 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.425 18:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.425 18:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.425 18:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.425 
18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.425 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.990 00:21:11.990 18:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.990 18:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.990 18:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.246 18:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.246 18:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.246 18:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.246 18:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.246 18:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.247 18:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:12.247 { 00:21:12.247 "cntlid": 125, 00:21:12.247 "qid": 0, 00:21:12.247 "state": "enabled", 00:21:12.247 "listen_address": { 00:21:12.247 "trtype": "TCP", 00:21:12.247 "adrfam": "IPv4", 00:21:12.247 "traddr": "10.0.0.2", 00:21:12.247 "trsvcid": "4420" 00:21:12.247 }, 00:21:12.247 "peer_address": { 00:21:12.247 "trtype": "TCP", 00:21:12.247 "adrfam": "IPv4", 00:21:12.247 "traddr": "10.0.0.1", 00:21:12.247 "trsvcid": "41176" 00:21:12.247 }, 00:21:12.247 "auth": { 00:21:12.247 "state": "completed", 00:21:12.247 "digest": "sha512", 00:21:12.247 "dhgroup": "ffdhe4096" 00:21:12.247 } 00:21:12.247 } 00:21:12.247 ]' 00:21:12.247 18:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.247 18:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.247 18:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.247 18:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:12.247 18:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.247 18:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.247 18:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.247 18:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.503 18:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:02:OWNmNGFlMjJiNTZhMDIzM2FlZjEyNTgyOGFjMDU4MGUzOTk3ZmU2ZGJiYmVjZTJkeC3lbA==: --dhchap-ctrl-secret DHHC-1:01:NjliNzExZGFmMTE4NzM0MWJlYTBiYWNiMzExYWQwN2GVe8VP: 00:21:13.435 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.435 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.435 18:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.435 18:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.435 18:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.435 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.435 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:13.435 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:13.692 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:13.692 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.692 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:13.692 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:13.692 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:13.692 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.693 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:13.693 18:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.693 18:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.693 18:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.693 18:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:13.693 18:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:14.257 00:21:14.257 18:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:14.257 18:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:14.257 18:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.514 18:51:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.514 18:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.514 18:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.514 18:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.514 18:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.514 18:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:14.514 { 00:21:14.514 "cntlid": 127, 00:21:14.514 "qid": 0, 00:21:14.514 "state": "enabled", 00:21:14.514 "listen_address": { 00:21:14.514 "trtype": "TCP", 00:21:14.514 "adrfam": "IPv4", 00:21:14.514 "traddr": "10.0.0.2", 00:21:14.514 "trsvcid": "4420" 00:21:14.514 }, 00:21:14.514 "peer_address": { 00:21:14.514 "trtype": "TCP", 00:21:14.514 "adrfam": "IPv4", 00:21:14.514 "traddr": "10.0.0.1", 00:21:14.514 "trsvcid": "58114" 00:21:14.514 }, 00:21:14.514 "auth": { 00:21:14.514 "state": "completed", 00:21:14.514 "digest": "sha512", 00:21:14.514 "dhgroup": "ffdhe4096" 00:21:14.514 } 00:21:14.514 } 00:21:14.514 ]' 00:21:14.514 18:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:14.514 18:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.514 18:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:14.514 18:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:14.514 18:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:14.514 18:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.514 18:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.514 18:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.770 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDU5ODNiNWQyMTZjZjc2ZmIyZDMxNTc1ZGEzYWRlNjdkOTE4YmE3YWRmZmFmNjQ1MjMxNTdjM2U3ZDY3NzA4MZTqCOQ=: 00:21:15.699 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.699 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.699 18:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.700 18:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.700 18:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.700 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.700 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.700 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
00:21:15.700 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:15.956 18:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:15.956 18:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.956 18:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:15.956 18:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:15.956 18:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:15.956 18:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.956 18:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.956 18:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.956 18:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.956 18:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.956 18:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.956 18:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.520 00:21:16.520 18:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:16.520 18:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.520 18:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:16.777 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.777 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.777 18:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.777 18:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.777 18:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.777 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.777 { 00:21:16.777 "cntlid": 129, 00:21:16.777 "qid": 0, 00:21:16.777 "state": "enabled", 00:21:16.777 "listen_address": { 00:21:16.777 "trtype": "TCP", 00:21:16.777 "adrfam": "IPv4", 00:21:16.777 "traddr": "10.0.0.2", 00:21:16.777 "trsvcid": "4420" 00:21:16.777 }, 00:21:16.777 "peer_address": { 00:21:16.777 "trtype": "TCP", 00:21:16.777 "adrfam": "IPv4", 00:21:16.777 "traddr": "10.0.0.1", 00:21:16.777 "trsvcid": "58144" 00:21:16.777 }, 00:21:16.777 "auth": { 
00:21:16.777 "state": "completed", 00:21:16.777 "digest": "sha512", 00:21:16.777 "dhgroup": "ffdhe6144" 00:21:16.777 } 00:21:16.777 } 00:21:16.777 ]' 00:21:16.777 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:17.034 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.034 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:17.034 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:17.034 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:17.034 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.034 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.034 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.291 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDllMzVkMjk1YzcwM2NkY2MxNTM4ODMyN2E1MWVjMDczMTgwYzY1ZjI5NmQzOWJhkXpF0A==: --dhchap-ctrl-secret DHHC-1:03:OTA1MDM5ZWQxOGZhN2M3OGNjYzk2NDU0OTI0NjE4MjRhZDE1NjhhY2Q5Y2RlMDM5ZDM4NGNhNjQ4NmNkMWVjOfej62w=: 00:21:18.220 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.220 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.220 18:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.220 18:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.220 18:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.220 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:18.220 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:18.220 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:18.478 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:18.478 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:18.478 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:18.478 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:18.478 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:18.478 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.478 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.478 18:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.478 18:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.478 18:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.478 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.478 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.042 00:21:19.042 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.042 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:19.042 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.299 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.299 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.299 18:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.299 18:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.299 18:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.299 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:19.299 { 00:21:19.299 "cntlid": 131, 00:21:19.299 "qid": 0, 00:21:19.299 "state": "enabled", 00:21:19.299 "listen_address": { 00:21:19.299 "trtype": "TCP", 00:21:19.299 "adrfam": "IPv4", 00:21:19.299 "traddr": "10.0.0.2", 00:21:19.299 "trsvcid": "4420" 00:21:19.299 }, 00:21:19.299 "peer_address": { 00:21:19.299 "trtype": "TCP", 00:21:19.299 "adrfam": "IPv4", 00:21:19.299 "traddr": "10.0.0.1", 00:21:19.299 "trsvcid": "58172" 00:21:19.299 }, 00:21:19.299 "auth": { 00:21:19.299 "state": "completed", 00:21:19.299 "digest": "sha512", 00:21:19.299 "dhgroup": "ffdhe6144" 00:21:19.299 } 00:21:19.299 } 00:21:19.299 ]' 00:21:19.299 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:19.299 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.299 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:19.299 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:19.299 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:19.299 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.299 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.299 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.556 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTBmNjA0NTljMjZjNjYyOGY2NmRkOGYyNmUwOGMwNTIu2TL+: --dhchap-ctrl-secret DHHC-1:02:Mjg0NTBlYzdlMTg4NGI2ODJlZjk5M2I2YWE4MDQzMDRlMmI0MWNmMDRkYjdlN2RjeXTjkw==: 00:21:20.529 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.529 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.529 18:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.529 18:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.529 18:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.529 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:20.529 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:20.529 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:20.787 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:20.787 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.787 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:20.787 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:20.787 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:20.787 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.787 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.787 18:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.787 18:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.787 18:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.787 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.787 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:21.353 00:21:21.353 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.353 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:21.353 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.612 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.612 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.612 18:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.612 18:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.612 18:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.612 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:21.612 { 00:21:21.612 "cntlid": 133, 00:21:21.612 "qid": 0, 00:21:21.612 "state": "enabled", 00:21:21.612 "listen_address": { 00:21:21.612 "trtype": "TCP", 00:21:21.612 "adrfam": "IPv4", 00:21:21.612 "traddr": "10.0.0.2", 00:21:21.612 "trsvcid": "4420" 00:21:21.612 }, 00:21:21.612 "peer_address": { 00:21:21.612 "trtype": "TCP", 00:21:21.612 "adrfam": "IPv4", 00:21:21.612 "traddr": "10.0.0.1", 00:21:21.612 "trsvcid": "58178" 00:21:21.612 }, 00:21:21.612 "auth": { 00:21:21.612 "state": "completed", 00:21:21.612 "digest": "sha512", 00:21:21.612 "dhgroup": "ffdhe6144" 00:21:21.612 } 00:21:21.612 } 00:21:21.612 ]' 00:21:21.612 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:21.612 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.612 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:21.612 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:21.612 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:21.612 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.612 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.612 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.870 18:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OWNmNGFlMjJiNTZhMDIzM2FlZjEyNTgyOGFjMDU4MGUzOTk3ZmU2ZGJiYmVjZTJkeC3lbA==: --dhchap-ctrl-secret DHHC-1:01:NjliNzExZGFmMTE4NzM0MWJlYTBiYWNiMzExYWQwN2GVe8VP: 00:21:22.800 18:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.800 18:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.800 18:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.800 18:51:33 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.800 18:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.800 18:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:22.800 18:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:22.800 18:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:23.056 18:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:23.056 18:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.056 18:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:23.056 18:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:23.056 18:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:23.056 18:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.056 18:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:23.056 18:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.056 18:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.056 18:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.056 18:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:23.056 18:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:23.618 00:21:23.618 18:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:23.618 18:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:23.618 18:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.874 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.874 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.874 18:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.874 18:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.874 18:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.874 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:23.874 { 00:21:23.874 "cntlid": 135, 00:21:23.874 "qid": 0, 00:21:23.874 "state": "enabled", 00:21:23.874 "listen_address": { 
00:21:23.874 "trtype": "TCP", 00:21:23.874 "adrfam": "IPv4", 00:21:23.874 "traddr": "10.0.0.2", 00:21:23.874 "trsvcid": "4420" 00:21:23.874 }, 00:21:23.874 "peer_address": { 00:21:23.874 "trtype": "TCP", 00:21:23.874 "adrfam": "IPv4", 00:21:23.874 "traddr": "10.0.0.1", 00:21:23.874 "trsvcid": "58200" 00:21:23.874 }, 00:21:23.874 "auth": { 00:21:23.874 "state": "completed", 00:21:23.874 "digest": "sha512", 00:21:23.874 "dhgroup": "ffdhe6144" 00:21:23.874 } 00:21:23.874 } 00:21:23.874 ]' 00:21:23.874 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.131 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.131 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.131 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:24.131 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.131 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.131 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.131 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.388 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDU5ODNiNWQyMTZjZjc2ZmIyZDMxNTc1ZGEzYWRlNjdkOTE4YmE3YWRmZmFmNjQ1MjMxNTdjM2U3ZDY3NzA4MZTqCOQ=: 00:21:25.319 18:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.319 18:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.319 18:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.319 18:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.319 18:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.319 18:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:25.319 18:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.319 18:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:25.319 18:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:25.575 18:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:25.576 18:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:25.576 18:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:25.576 18:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:25.576 18:51:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:21:25.576 18:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.576 18:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.576 18:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.576 18:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.576 18:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.576 18:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.576 18:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.506 00:21:26.506 18:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.506 18:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.506 18:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.763 18:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.763 18:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.763 18:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.763 18:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.763 18:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.763 18:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.763 { 00:21:26.763 "cntlid": 137, 00:21:26.763 "qid": 0, 00:21:26.763 "state": "enabled", 00:21:26.763 "listen_address": { 00:21:26.763 "trtype": "TCP", 00:21:26.763 "adrfam": "IPv4", 00:21:26.763 "traddr": "10.0.0.2", 00:21:26.763 "trsvcid": "4420" 00:21:26.763 }, 00:21:26.763 "peer_address": { 00:21:26.763 "trtype": "TCP", 00:21:26.763 "adrfam": "IPv4", 00:21:26.763 "traddr": "10.0.0.1", 00:21:26.763 "trsvcid": "53766" 00:21:26.763 }, 00:21:26.763 "auth": { 00:21:26.763 "state": "completed", 00:21:26.763 "digest": "sha512", 00:21:26.763 "dhgroup": "ffdhe8192" 00:21:26.763 } 00:21:26.763 } 00:21:26.763 ]' 00:21:26.763 18:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.763 18:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.763 18:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.021 18:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:27.021 18:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.021 18:51:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.021 18:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.021 18:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.279 18:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDllMzVkMjk1YzcwM2NkY2MxNTM4ODMyN2E1MWVjMDczMTgwYzY1ZjI5NmQzOWJhkXpF0A==: --dhchap-ctrl-secret DHHC-1:03:OTA1MDM5ZWQxOGZhN2M3OGNjYzk2NDU0OTI0NjE4MjRhZDE1NjhhY2Q5Y2RlMDM5ZDM4NGNhNjQ4NmNkMWVjOfej62w=: 00:21:28.210 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.210 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.210 18:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.210 18:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.210 18:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.210 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.210 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:28.210 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:28.467 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:28.467 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.467 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:28.467 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:28.467 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:28.467 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.467 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.467 18:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.467 18:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.467 18:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.467 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.467 18:51:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.399 00:21:29.399 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.399 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.399 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.656 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.656 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.656 18:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.656 18:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.656 18:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.656 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.656 { 00:21:29.656 "cntlid": 139, 00:21:29.656 "qid": 0, 00:21:29.656 "state": "enabled", 00:21:29.656 "listen_address": { 00:21:29.656 "trtype": "TCP", 00:21:29.656 "adrfam": "IPv4", 00:21:29.656 "traddr": "10.0.0.2", 00:21:29.656 "trsvcid": "4420" 00:21:29.656 }, 00:21:29.656 "peer_address": { 00:21:29.656 "trtype": "TCP", 00:21:29.656 "adrfam": "IPv4", 00:21:29.656 "traddr": "10.0.0.1", 00:21:29.656 "trsvcid": "53788" 00:21:29.656 }, 00:21:29.656 "auth": { 00:21:29.656 "state": "completed", 00:21:29.656 "digest": "sha512", 00:21:29.656 "dhgroup": "ffdhe8192" 00:21:29.656 } 00:21:29.656 } 00:21:29.656 ]' 00:21:29.656 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.656 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.656 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.656 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:29.656 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.656 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.656 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.656 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.914 18:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTBmNjA0NTljMjZjNjYyOGY2NmRkOGYyNmUwOGMwNTIu2TL+: --dhchap-ctrl-secret DHHC-1:02:Mjg0NTBlYzdlMTg4NGI2ODJlZjk5M2I2YWE4MDQzMDRlMmI0MWNmMDRkYjdlN2RjeXTjkw==: 00:21:30.847 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:21:30.847 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.847 18:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.847 18:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.847 18:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.847 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.847 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:30.847 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:31.104 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:31.104 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.104 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:31.104 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:31.104 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:31.104 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.104 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.104 18:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.104 18:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.104 18:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.104 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.104 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.036 00:21:32.036 18:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:32.036 18:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:32.036 18:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.293 18:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.293 18:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.293 18:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:32.293 18:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.293 18:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.293 18:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.293 { 00:21:32.293 "cntlid": 141, 00:21:32.293 "qid": 0, 00:21:32.293 "state": "enabled", 00:21:32.293 "listen_address": { 00:21:32.293 "trtype": "TCP", 00:21:32.293 "adrfam": "IPv4", 00:21:32.293 "traddr": "10.0.0.2", 00:21:32.293 "trsvcid": "4420" 00:21:32.293 }, 00:21:32.293 "peer_address": { 00:21:32.293 "trtype": "TCP", 00:21:32.293 "adrfam": "IPv4", 00:21:32.293 "traddr": "10.0.0.1", 00:21:32.293 "trsvcid": "53824" 00:21:32.293 }, 00:21:32.293 "auth": { 00:21:32.293 "state": "completed", 00:21:32.293 "digest": "sha512", 00:21:32.293 "dhgroup": "ffdhe8192" 00:21:32.293 } 00:21:32.293 } 00:21:32.293 ]' 00:21:32.293 18:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.293 18:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.293 18:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.293 18:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:32.293 18:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.293 18:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.293 18:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.293 18:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.550 18:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OWNmNGFlMjJiNTZhMDIzM2FlZjEyNTgyOGFjMDU4MGUzOTk3ZmU2ZGJiYmVjZTJkeC3lbA==: --dhchap-ctrl-secret DHHC-1:01:NjliNzExZGFmMTE4NzM0MWJlYTBiYWNiMzExYWQwN2GVe8VP: 00:21:33.923 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.923 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.923 18:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.923 18:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.923 18:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.923 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.923 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:33.923 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:33.923 18:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:21:33.923 18:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.923 18:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:33.923 18:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:33.923 18:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:33.923 18:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.923 18:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:33.923 18:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.923 18:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.923 18:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.923 18:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.923 18:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.854 00:21:34.854 18:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.854 18:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.854 18:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.112 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.112 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.112 18:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.112 18:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.112 18:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.112 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:35.112 { 00:21:35.112 "cntlid": 143, 00:21:35.112 "qid": 0, 00:21:35.112 "state": "enabled", 00:21:35.112 "listen_address": { 00:21:35.112 "trtype": "TCP", 00:21:35.112 "adrfam": "IPv4", 00:21:35.112 "traddr": "10.0.0.2", 00:21:35.112 "trsvcid": "4420" 00:21:35.112 }, 00:21:35.112 "peer_address": { 00:21:35.112 "trtype": "TCP", 00:21:35.112 "adrfam": "IPv4", 00:21:35.112 "traddr": "10.0.0.1", 00:21:35.112 "trsvcid": "36510" 00:21:35.112 }, 00:21:35.112 "auth": { 00:21:35.112 "state": "completed", 00:21:35.112 "digest": "sha512", 00:21:35.112 "dhgroup": "ffdhe8192" 00:21:35.112 } 00:21:35.112 } 00:21:35.112 ]' 00:21:35.112 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:35.112 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.112 18:51:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:35.112 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.112 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:35.112 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.112 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.112 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.369 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDU5ODNiNWQyMTZjZjc2ZmIyZDMxNTc1ZGEzYWRlNjdkOTE4YmE3YWRmZmFmNjQ1MjMxNTdjM2U3ZDY3NzA4MZTqCOQ=: 00:21:36.302 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.302 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.302 18:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.302 18:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.302 18:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.302 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:36.302 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:36.302 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:36.302 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:36.302 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:36.302 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:36.561 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:36.561 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.561 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:36.561 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:36.561 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:36.561 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.561 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
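Each connect_authenticate pass traced above runs the same DH-CHAP round trip, only varying the digest, DH group and key index. Condensed from the commands visible in this log (host NQN, hostid and DHHC-1 secrets abbreviated to angle-bracket placeholders; target-side rpc_cmd calls shown as plain rpc.py against the default target socket), one iteration looks roughly like this sketch:

  # host-side initiator options for the digest/dhgroup under test
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # register the host NQN on the subsystem with the key pair under test
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # attach through the host RPC, then confirm the qpair reports the expected digest/dhgroup and auth state "completed"
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # repeat the handshake with the kernel initiator, then tear down
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q <hostnqn> --hostid <hostid> --dhchap-secret <DHHC-1 secret> --dhchap-ctrl-secret <DHHC-1 ctrl secret>
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>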
00:21:36.561 18:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.561 18:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.561 18:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.561 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.561 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.496 00:21:37.496 18:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.496 18:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.496 18:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.754 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.754 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.754 18:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.754 18:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.754 18:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.754 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.754 { 00:21:37.754 "cntlid": 145, 00:21:37.754 "qid": 0, 00:21:37.754 "state": "enabled", 00:21:37.754 "listen_address": { 00:21:37.754 "trtype": "TCP", 00:21:37.754 "adrfam": "IPv4", 00:21:37.754 "traddr": "10.0.0.2", 00:21:37.754 "trsvcid": "4420" 00:21:37.754 }, 00:21:37.754 "peer_address": { 00:21:37.754 "trtype": "TCP", 00:21:37.754 "adrfam": "IPv4", 00:21:37.754 "traddr": "10.0.0.1", 00:21:37.754 "trsvcid": "36538" 00:21:37.754 }, 00:21:37.754 "auth": { 00:21:37.754 "state": "completed", 00:21:37.754 "digest": "sha512", 00:21:37.754 "dhgroup": "ffdhe8192" 00:21:37.754 } 00:21:37.754 } 00:21:37.754 ]' 00:21:37.754 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:37.754 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.754 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.013 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:38.013 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.013 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.013 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.013 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.271 
18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDllMzVkMjk1YzcwM2NkY2MxNTM4ODMyN2E1MWVjMDczMTgwYzY1ZjI5NmQzOWJhkXpF0A==: --dhchap-ctrl-secret DHHC-1:03:OTA1MDM5ZWQxOGZhN2M3OGNjYzk2NDU0OTI0NjE4MjRhZDE1NjhhY2Q5Y2RlMDM5ZDM4NGNhNjQ4NmNkMWVjOfej62w=: 00:21:39.203 18:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.203 18:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.203 18:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.203 18:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.203 18:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.203 18:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:39.203 18:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.203 18:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.203 18:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.203 18:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:39.203 18:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:39.203 18:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:39.203 18:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:39.203 18:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:39.203 18:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:39.203 18:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:39.203 18:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:39.203 18:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:40.136 request: 00:21:40.136 { 00:21:40.136 "name": "nvme0", 00:21:40.136 "trtype": "tcp", 00:21:40.136 "traddr": 
"10.0.0.2", 00:21:40.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:40.136 "adrfam": "ipv4", 00:21:40.136 "trsvcid": "4420", 00:21:40.136 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:40.136 "dhchap_key": "key2", 00:21:40.136 "method": "bdev_nvme_attach_controller", 00:21:40.136 "req_id": 1 00:21:40.136 } 00:21:40.136 Got JSON-RPC error response 00:21:40.136 response: 00:21:40.136 { 00:21:40.136 "code": -5, 00:21:40.136 "message": "Input/output error" 00:21:40.136 } 00:21:40.136 18:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:40.136 18:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:40.136 18:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:40.136 18:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:40.136 18:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.136 18:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.136 18:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.136 18:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.136 18:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.136 18:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.136 18:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.136 18:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.136 18:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.136 18:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:40.136 18:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.136 18:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:40.136 18:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:40.136 18:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:40.136 18:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:40.136 18:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.136 18:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:41.066 request: 00:21:41.066 { 00:21:41.066 "name": "nvme0", 00:21:41.066 "trtype": "tcp", 00:21:41.066 "traddr": "10.0.0.2", 00:21:41.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:41.066 "adrfam": "ipv4", 00:21:41.066 "trsvcid": "4420", 00:21:41.066 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:41.066 "dhchap_key": "key1", 00:21:41.066 "dhchap_ctrlr_key": "ckey2", 00:21:41.066 "method": "bdev_nvme_attach_controller", 00:21:41.066 "req_id": 1 00:21:41.066 } 00:21:41.066 Got JSON-RPC error response 00:21:41.066 response: 00:21:41.066 { 00:21:41.066 "code": -5, 00:21:41.066 "message": "Input/output error" 00:21:41.066 } 00:21:41.066 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:41.067 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:41.067 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:41.067 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:41.067 18:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.067 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.067 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.067 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.067 18:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:41.067 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.067 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.067 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.067 18:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.067 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:41.067 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.067 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:41.067 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.067 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:41.067 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.067 18:51:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.067 18:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.631 request: 00:21:41.631 { 00:21:41.631 "name": "nvme0", 00:21:41.631 "trtype": "tcp", 00:21:41.631 "traddr": "10.0.0.2", 00:21:41.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:41.631 "adrfam": "ipv4", 00:21:41.631 "trsvcid": "4420", 00:21:41.631 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:41.631 "dhchap_key": "key1", 00:21:41.631 "dhchap_ctrlr_key": "ckey1", 00:21:41.631 "method": "bdev_nvme_attach_controller", 00:21:41.631 "req_id": 1 00:21:41.631 } 00:21:41.631 Got JSON-RPC error response 00:21:41.631 response: 00:21:41.631 { 00:21:41.631 "code": -5, 00:21:41.631 "message": "Input/output error" 00:21:41.631 } 00:21:41.889 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:41.889 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:41.889 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:41.889 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:41.889 18:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.889 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.889 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.889 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.889 18:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1395343 00:21:41.889 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 1395343 ']' 00:21:41.889 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 1395343 00:21:41.889 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:21:41.889 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:41.889 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1395343 00:21:41.889 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:41.889 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:41.889 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1395343' 00:21:41.889 killing process with pid 1395343 00:21:41.889 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 1395343 00:21:41.889 18:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 1395343 00:21:42.147 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:42.147 18:51:52 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:42.147 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:42.147 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.147 18:51:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1417607 00:21:42.147 18:51:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:42.147 18:51:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1417607 00:21:42.147 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1417607 ']' 00:21:42.147 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.147 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:42.147 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.147 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:42.147 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.403 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:42.403 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:21:42.403 18:51:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:42.403 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:42.403 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.403 18:51:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.403 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:42.403 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1417607 00:21:42.403 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1417607 ']' 00:21:42.403 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.403 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:42.403 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
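At this point the nvmf target that served the earlier iterations (pid 1395343) has been killed and a fresh nvmf_tgt is being started in the cvl_0_0_ns_spdk namespace with --wait-for-rpc and the nvmf_auth log flag, so the DH-CHAP exchanges in the remaining cases are traced by the target itself (command as recorded in this run):

  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth

The negative cases surrounding this restart follow one pattern: the subsystem is configured with one key (for example nvmf_subsystem_add_host ... --dhchap-key key1) while bdev_nvme_attach_controller is issued with a mismatched key or controller key, and the attach is expected to fail with JSON-RPC error code -5, "Input/output error", rather than establish a session.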
00:21:42.404 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:42.404 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.691 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:42.691 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:21:42.691 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:42.691 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.691 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.691 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.691 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:42.691 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.691 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:42.691 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:42.691 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:42.691 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.691 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:42.691 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.691 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.691 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.691 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:42.691 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:43.622 00:21:43.622 18:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.622 18:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.622 18:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.880 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.880 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.880 18:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.880 18:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.880 18:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.880 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.880 { 00:21:43.880 
"cntlid": 1, 00:21:43.880 "qid": 0, 00:21:43.880 "state": "enabled", 00:21:43.880 "listen_address": { 00:21:43.880 "trtype": "TCP", 00:21:43.880 "adrfam": "IPv4", 00:21:43.880 "traddr": "10.0.0.2", 00:21:43.880 "trsvcid": "4420" 00:21:43.880 }, 00:21:43.880 "peer_address": { 00:21:43.880 "trtype": "TCP", 00:21:43.880 "adrfam": "IPv4", 00:21:43.880 "traddr": "10.0.0.1", 00:21:43.880 "trsvcid": "36576" 00:21:43.880 }, 00:21:43.880 "auth": { 00:21:43.880 "state": "completed", 00:21:43.880 "digest": "sha512", 00:21:43.880 "dhgroup": "ffdhe8192" 00:21:43.880 } 00:21:43.880 } 00:21:43.880 ]' 00:21:43.880 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:43.880 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.880 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.880 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:43.880 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.138 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.138 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.138 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.395 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MDU5ODNiNWQyMTZjZjc2ZmIyZDMxNTc1ZGEzYWRlNjdkOTE4YmE3YWRmZmFmNjQ1MjMxNTdjM2U3ZDY3NzA4MZTqCOQ=: 00:21:45.328 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.328 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.328 18:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.328 18:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.328 18:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.328 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:45.328 18:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.328 18:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.328 18:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.328 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:45.328 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:45.586 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:45.586 18:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:45.586 18:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:45.586 18:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:45.586 18:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:45.586 18:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:45.586 18:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:45.586 18:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:45.586 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:45.845 request: 00:21:45.845 { 00:21:45.845 "name": "nvme0", 00:21:45.845 "trtype": "tcp", 00:21:45.845 "traddr": "10.0.0.2", 00:21:45.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:45.845 "adrfam": "ipv4", 00:21:45.845 "trsvcid": "4420", 00:21:45.845 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:45.845 "dhchap_key": "key3", 00:21:45.845 "method": "bdev_nvme_attach_controller", 00:21:45.845 "req_id": 1 00:21:45.845 } 00:21:45.845 Got JSON-RPC error response 00:21:45.845 response: 00:21:45.845 { 00:21:45.845 "code": -5, 00:21:45.845 "message": "Input/output error" 00:21:45.845 } 00:21:45.845 18:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:45.845 18:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:45.845 18:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:45.845 18:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:45.845 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:45.845 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:45.845 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:45.845 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:46.110 18:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:46.110 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:46.110 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:46.110 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:46.110 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:46.110 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:46.110 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:46.110 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:46.110 18:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:46.110 request: 00:21:46.110 { 00:21:46.110 "name": "nvme0", 00:21:46.110 "trtype": "tcp", 00:21:46.110 "traddr": "10.0.0.2", 00:21:46.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:46.110 "adrfam": "ipv4", 00:21:46.110 "trsvcid": "4420", 00:21:46.110 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:46.110 "dhchap_key": "key3", 00:21:46.110 "method": "bdev_nvme_attach_controller", 00:21:46.110 "req_id": 1 00:21:46.110 } 00:21:46.110 Got JSON-RPC error response 00:21:46.110 response: 00:21:46.110 { 00:21:46.110 "code": -5, 00:21:46.110 "message": "Input/output error" 00:21:46.110 } 00:21:46.403 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:46.403 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:46.403 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:46.403 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:46.403 18:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:46.403 18:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:46.403 18:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:46.403 18:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:46.403 18:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:46.403 18:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:46.403 18:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.403 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.403 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.403 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.403 18:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.403 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.403 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.661 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.661 18:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:46.661 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:46.661 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:46.661 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:46.661 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:46.661 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:46.661 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:46.661 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:46.661 18:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:46.661 request: 00:21:46.661 { 00:21:46.661 "name": "nvme0", 00:21:46.661 "trtype": "tcp", 00:21:46.661 "traddr": "10.0.0.2", 00:21:46.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:46.661 "adrfam": "ipv4", 00:21:46.661 "trsvcid": "4420", 00:21:46.661 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:46.661 "dhchap_key": "key0", 00:21:46.661 "dhchap_ctrlr_key": "key1", 00:21:46.661 "method": "bdev_nvme_attach_controller", 00:21:46.661 "req_id": 1 00:21:46.661 } 00:21:46.661 Got JSON-RPC error response 00:21:46.661 response: 00:21:46.661 { 00:21:46.661 "code": -5, 00:21:46.661 "message": "Input/output error" 00:21:46.661 } 00:21:46.919 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:46.919 18:51:56 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:46.919 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:46.919 18:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:46.919 18:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:46.919 18:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:47.177 00:21:47.177 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:47.177 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:21:47.177 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.435 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.435 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.435 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.695 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:47.695 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:47.695 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1395377 00:21:47.695 18:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 1395377 ']' 00:21:47.695 18:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 1395377 00:21:47.695 18:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:21:47.695 18:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:47.695 18:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1395377 00:21:47.695 18:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:47.695 18:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:47.695 18:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1395377' 00:21:47.695 killing process with pid 1395377 00:21:47.695 18:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 1395377 00:21:47.695 18:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 1395377 00:21:47.954 18:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:47.954 18:51:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:47.954 18:51:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:47.954 18:51:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:47.954 18:51:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 
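The negative-path checks traced above all follow the same pattern: narrow the host's allowed DH-HMAC-CHAP digests or DH groups with bdev_nvme_set_options, attempt bdev_nvme_attach_controller against the authenticated subsystem, confirm the attach is rejected with code -5 (Input/output error), then restore the full digest/DH-group set before the next positive attach. A minimal sketch of that pattern using the same rpc.py invocations seen in this run (the socket path, addresses, and key slot are specific to this rig and would differ elsewhere):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/host.sock   # host-side SPDK application socket used by hostrpc in this run

# Restrict the host to sha256 only, then expect the DH-CHAP attach to be rejected (-5, Input/output error)
$rpc -s $sock bdev_nvme_set_options --dhchap-digests sha256
$rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
  -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 || echo "attach rejected as expected"

# Restore the full digest and DH-group sets so the final positive attach can succeed
$rpc -s $sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 \
  --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192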
00:21:47.954 18:51:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:47.954 18:51:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:47.954 rmmod nvme_tcp 00:21:47.954 rmmod nvme_fabrics 00:21:48.212 rmmod nvme_keyring 00:21:48.212 18:51:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:48.212 18:51:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:48.212 18:51:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:48.212 18:51:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1417607 ']' 00:21:48.212 18:51:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1417607 00:21:48.212 18:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 1417607 ']' 00:21:48.212 18:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 1417607 00:21:48.212 18:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:21:48.212 18:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:48.212 18:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1417607 00:21:48.212 18:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:48.212 18:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:48.212 18:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1417607' 00:21:48.212 killing process with pid 1417607 00:21:48.212 18:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 1417607 00:21:48.212 18:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 1417607 00:21:48.471 18:51:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:48.471 18:51:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:48.471 18:51:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:48.471 18:51:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:48.471 18:51:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:48.471 18:51:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.471 18:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:48.471 18:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.376 18:52:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:50.376 18:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ETz /tmp/spdk.key-sha256.Hmh /tmp/spdk.key-sha384.WOA /tmp/spdk.key-sha512.gGF /tmp/spdk.key-sha512.1RM /tmp/spdk.key-sha384.zUx /tmp/spdk.key-sha256.Ata '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:50.376 00:21:50.376 real 3m7.228s 00:21:50.376 user 7m16.955s 00:21:50.376 sys 0m22.352s 00:21:50.376 18:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:50.376 18:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.376 ************************************ 00:21:50.376 END TEST 
nvmf_auth_target 00:21:50.376 ************************************ 00:21:50.376 18:52:00 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:21:50.376 18:52:00 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:50.376 18:52:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:21:50.376 18:52:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:50.376 18:52:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:50.376 ************************************ 00:21:50.376 START TEST nvmf_bdevio_no_huge 00:21:50.376 ************************************ 00:21:50.376 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:50.642 * Looking for test storage... 00:21:50.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
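The long PATH lines that follow come from /etc/opt/spdk-pkgdep/paths/export.sh, which each suite re-sources through scripts/common.sh; it prepends the Go, protoc, and golangci tool directories unconditionally, which is why the exported value carries duplicate copies of the same prefixes through the trace. Judging from the line numbers shown (@2 through @6), the script is roughly equivalent to the sketch below; the exact contents are an assumption reconstructed from the expanded values in the xtrace output:

# paths/export.sh (approximate reconstruction, not verified against the file itself)
PATH=/opt/golangci/1.54.2/bin:$PATH
PATH=/opt/go/1.21.1/bin:$PATH
PATH=/opt/protoc/21.7/bin:$PATH
export PATH
echo $PATH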
00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- 
# MALLOC_BLOCK_SIZE=512 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:50.642 18:52:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:52.544 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:52.544 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:52.545 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:52.545 18:52:02 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:52.545 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:52.545 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:52.545 
18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:52.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:52.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:21:52.545 00:21:52.545 --- 10.0.0.2 ping statistics --- 00:21:52.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.545 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:21:52.545 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:52.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:52.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:21:52.545 00:21:52.545 --- 10.0.0.1 ping statistics --- 00:21:52.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.545 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:21:52.803 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.803 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:52.803 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:52.803 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.803 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:52.803 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:52.803 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.803 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:52.803 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:52.803 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:52.803 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:52.803 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:52.803 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:52.803 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1420274 00:21:52.803 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:52.803 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1420274 00:21:52.803 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 1420274 ']' 00:21:52.803 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.803 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:21:52.803 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.803 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:52.803 18:52:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:52.803 [2024-07-20 18:52:02.938597] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:21:52.803 [2024-07-20 18:52:02.938678] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:52.803 [2024-07-20 18:52:03.008506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:52.803 [2024-07-20 18:52:03.088591] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.803 [2024-07-20 18:52:03.088647] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.803 [2024-07-20 18:52:03.088676] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.803 [2024-07-20 18:52:03.088688] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.803 [2024-07-20 18:52:03.088697] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:52.803 [2024-07-20 18:52:03.088785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:52.803 [2024-07-20 18:52:03.088852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:52.803 [2024-07-20 18:52:03.088899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:52.803 [2024-07-20 18:52:03.088902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:53.060 [2024-07-20 18:52:03.198976] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 
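Before the target application was started above, nvmf_tcp_init carved the two ice ports into a small back-to-back test bed: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (the target side), while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with TCP port 4420 opened in iptables and a ping in each direction as a sanity check. Condensed from the trace above (interface names and addresses are simply the ones this rig uses):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator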
00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:53.060 Malloc0 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:53.060 [2024-07-20 18:52:03.236555] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:53.060 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:53.060 { 00:21:53.060 "params": { 00:21:53.060 "name": "Nvme$subsystem", 00:21:53.060 "trtype": "$TEST_TRANSPORT", 00:21:53.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.060 "adrfam": "ipv4", 00:21:53.060 "trsvcid": "$NVMF_PORT", 00:21:53.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.060 "hdgst": ${hdgst:-false}, 00:21:53.060 "ddgst": ${ddgst:-false} 00:21:53.060 }, 00:21:53.060 "method": "bdev_nvme_attach_controller" 00:21:53.060 } 00:21:53.060 EOF 00:21:53.061 )") 00:21:53.061 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:53.061 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
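On the target side the suite provisions a minimal subsystem for bdevio to exercise: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem carrying that bdev as a namespace, and a listener on 10.0.0.2:4420. The host never touches the kernel initiator here; bdevio runs as an SPDK application and attaches through bdev_nvme using the JSON that gen_nvmf_target_json prints just below, passed in over /dev/fd/62. Both the target and bdevio run with --no-huge -s 1024, which is the no-hugepages memory path that gives the suite its name. A condensed sketch of the same provisioning with rpc.py, assuming rpc_cmd resolves to rpc.py against the target's default RPC socket as it appears to in this run:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0                              # 64 MiB backing bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420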
00:21:53.061 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:53.061 18:52:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:53.061 "params": { 00:21:53.061 "name": "Nvme1", 00:21:53.061 "trtype": "tcp", 00:21:53.061 "traddr": "10.0.0.2", 00:21:53.061 "adrfam": "ipv4", 00:21:53.061 "trsvcid": "4420", 00:21:53.061 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.061 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:53.061 "hdgst": false, 00:21:53.061 "ddgst": false 00:21:53.061 }, 00:21:53.061 "method": "bdev_nvme_attach_controller" 00:21:53.061 }' 00:21:53.061 [2024-07-20 18:52:03.279322] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:21:53.061 [2024-07-20 18:52:03.279401] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1420302 ] 00:21:53.061 [2024-07-20 18:52:03.340183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:53.318 [2024-07-20 18:52:03.426883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.318 [2024-07-20 18:52:03.426933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:53.318 [2024-07-20 18:52:03.426936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.574 I/O targets: 00:21:53.574 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:53.574 00:21:53.574 00:21:53.574 CUnit - A unit testing framework for C - Version 2.1-3 00:21:53.574 http://cunit.sourceforge.net/ 00:21:53.574 00:21:53.574 00:21:53.574 Suite: bdevio tests on: Nvme1n1 00:21:53.574 Test: blockdev write read block ...passed 00:21:53.574 Test: blockdev write zeroes read block ...passed 00:21:53.574 Test: blockdev write zeroes read no split ...passed 00:21:53.830 Test: blockdev write zeroes read split ...passed 00:21:53.830 Test: blockdev write zeroes read split partial ...passed 00:21:53.830 Test: blockdev reset ...[2024-07-20 18:52:03.975348] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:53.830 [2024-07-20 18:52:03.975462] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cda00 (9): Bad file descriptor 00:21:53.830 [2024-07-20 18:52:04.032093] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:53.830 passed 00:21:53.830 Test: blockdev write read 8 blocks ...passed 00:21:53.830 Test: blockdev write read size > 128k ...passed 00:21:53.830 Test: blockdev write read invalid size ...passed 00:21:53.830 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:53.830 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:53.830 Test: blockdev write read max offset ...passed 00:21:54.087 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:54.087 Test: blockdev writev readv 8 blocks ...passed 00:21:54.087 Test: blockdev writev readv 30 x 1block ...passed 00:21:54.087 Test: blockdev writev readv block ...passed 00:21:54.087 Test: blockdev writev readv size > 128k ...passed 00:21:54.087 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:54.087 Test: blockdev comparev and writev ...[2024-07-20 18:52:04.255532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:54.087 [2024-07-20 18:52:04.255574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:54.087 [2024-07-20 18:52:04.255600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:54.087 [2024-07-20 18:52:04.255618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:54.087 [2024-07-20 18:52:04.256086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:54.087 [2024-07-20 18:52:04.256110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:54.088 [2024-07-20 18:52:04.256133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:54.088 [2024-07-20 18:52:04.256150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:54.088 [2024-07-20 18:52:04.256635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:54.088 [2024-07-20 18:52:04.256664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:54.088 [2024-07-20 18:52:04.256688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:54.088 [2024-07-20 18:52:04.256707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:54.088 [2024-07-20 18:52:04.257180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:54.088 [2024-07-20 18:52:04.257204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:54.088 [2024-07-20 18:52:04.257226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:54.088 [2024-07-20 18:52:04.257243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:54.088 passed 00:21:54.088 Test: blockdev nvme passthru rw ...passed 00:21:54.088 Test: blockdev nvme passthru vendor specific ...[2024-07-20 18:52:04.341321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:54.088 [2024-07-20 18:52:04.341347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:54.088 [2024-07-20 18:52:04.341636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:54.088 [2024-07-20 18:52:04.341657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:54.088 [2024-07-20 18:52:04.341954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:54.088 [2024-07-20 18:52:04.341978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:54.088 [2024-07-20 18:52:04.342279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:54.088 [2024-07-20 18:52:04.342301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:54.088 passed 00:21:54.088 Test: blockdev nvme admin passthru ...passed 00:21:54.088 Test: blockdev copy ...passed 00:21:54.088 00:21:54.088 Run Summary: Type Total Ran Passed Failed Inactive 00:21:54.088 suites 1 1 n/a 0 0 00:21:54.088 tests 23 23 23 0 0 00:21:54.088 asserts 152 152 152 0 n/a 00:21:54.088 00:21:54.088 Elapsed time = 1.311 seconds 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:54.653 rmmod nvme_tcp 00:21:54.653 rmmod nvme_fabrics 00:21:54.653 rmmod nvme_keyring 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1420274 ']' 00:21:54.653 18:52:04 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1420274 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 1420274 ']' 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 1420274 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1420274 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1420274' 00:21:54.653 killing process with pid 1420274 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 1420274 00:21:54.653 18:52:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 1420274 00:21:54.911 18:52:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:54.911 18:52:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:54.911 18:52:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:54.911 18:52:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:54.911 18:52:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:54.911 18:52:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.911 18:52:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:54.911 18:52:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.442 18:52:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:57.442 00:21:57.442 real 0m6.539s 00:21:57.442 user 0m11.243s 00:21:57.442 sys 0m2.545s 00:21:57.442 18:52:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:57.442 18:52:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:57.442 ************************************ 00:21:57.442 END TEST nvmf_bdevio_no_huge 00:21:57.442 ************************************ 00:21:57.442 18:52:07 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:57.442 18:52:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:57.442 18:52:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:57.442 18:52:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:57.442 ************************************ 00:21:57.442 START TEST nvmf_tls 00:21:57.442 ************************************ 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:57.442 * Looking for test storage... 
00:21:57.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:57.442 18:52:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:59.343 
18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:59.343 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:59.343 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:59.344 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:59.344 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:59.344 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:59.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:21:59.344 00:21:59.344 --- 10.0.0.2 ping statistics --- 00:21:59.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.344 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:59.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:59.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:21:59.344 00:21:59.344 --- 10.0.0.1 ping statistics --- 00:21:59.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.344 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1422488 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1422488 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1422488 ']' 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:59.344 18:52:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.344 [2024-07-20 18:52:09.518863] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:21:59.344 [2024-07-20 18:52:09.518958] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.344 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.344 [2024-07-20 18:52:09.588062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.602 [2024-07-20 18:52:09.676565] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.602 [2024-07-20 18:52:09.676629] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:59.602 [2024-07-20 18:52:09.676642] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.602 [2024-07-20 18:52:09.676668] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.602 [2024-07-20 18:52:09.676678] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:59.602 [2024-07-20 18:52:09.676705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.602 18:52:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:59.602 18:52:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:59.602 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:59.602 18:52:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:59.602 18:52:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.602 18:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.602 18:52:09 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:59.602 18:52:09 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:59.859 true 00:21:59.859 18:52:10 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:59.859 18:52:10 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:00.116 18:52:10 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:00.116 18:52:10 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:00.116 18:52:10 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:00.376 18:52:10 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:00.376 18:52:10 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:00.640 18:52:10 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:00.640 18:52:10 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:00.640 18:52:10 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:00.897 18:52:11 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:00.897 18:52:11 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:01.154 18:52:11 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:01.154 18:52:11 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:01.155 18:52:11 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:01.155 18:52:11 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:01.412 18:52:11 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:01.413 18:52:11 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:01.413 18:52:11 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:01.670 18:52:11 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:01.670 18:52:11 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:01.928 18:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:01.928 18:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:01.928 18:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:02.186 18:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:02.186 18:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.UtJfFII5Dd 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.1VRy3WDkZn 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.UtJfFII5Dd 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.1VRy3WDkZn 00:22:02.445 18:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:22:02.703 18:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:02.961 18:52:13 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.UtJfFII5Dd 00:22:02.961 18:52:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UtJfFII5Dd 00:22:02.961 18:52:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:03.219 [2024-07-20 18:52:13.532653] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.477 18:52:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:03.477 18:52:13 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:03.734 [2024-07-20 18:52:14.021955] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:03.734 [2024-07-20 18:52:14.022196] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.734 18:52:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:03.992 malloc0 00:22:03.992 18:52:14 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:04.249 18:52:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UtJfFII5Dd 00:22:04.507 [2024-07-20 18:52:14.775945] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:04.507 18:52:14 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.UtJfFII5Dd 00:22:04.507 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.707 Initializing NVMe Controllers 00:22:16.707 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:16.707 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:16.707 Initialization complete. Launching workers. 
00:22:16.707 ======================================================== 00:22:16.708 Latency(us) 00:22:16.708 Device Information : IOPS MiB/s Average min max 00:22:16.708 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7770.90 30.36 8238.68 1234.25 9333.03 00:22:16.708 ======================================================== 00:22:16.708 Total : 7770.90 30.36 8238.68 1234.25 9333.03 00:22:16.708 00:22:16.708 18:52:24 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UtJfFII5Dd 00:22:16.708 18:52:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:16.708 18:52:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:16.708 18:52:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:16.708 18:52:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UtJfFII5Dd' 00:22:16.708 18:52:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.708 18:52:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1424258 00:22:16.708 18:52:24 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:16.708 18:52:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:16.708 18:52:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1424258 /var/tmp/bdevperf.sock 00:22:16.708 18:52:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1424258 ']' 00:22:16.708 18:52:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.708 18:52:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:16.708 18:52:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:16.708 18:52:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:16.708 18:52:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.708 [2024-07-20 18:52:24.943035] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:16.708 [2024-07-20 18:52:24.943126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1424258 ] 00:22:16.708 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.708 [2024-07-20 18:52:25.004667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.708 [2024-07-20 18:52:25.095723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.708 18:52:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:16.708 18:52:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:16.708 18:52:25 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UtJfFII5Dd 00:22:16.708 [2024-07-20 18:52:25.422284] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:16.708 [2024-07-20 18:52:25.422391] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:16.708 TLSTESTn1 00:22:16.708 18:52:25 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:16.708 Running I/O for 10 seconds... 00:22:26.666 00:22:26.666 Latency(us) 00:22:26.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.666 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:26.666 Verification LBA range: start 0x0 length 0x2000 00:22:26.666 TLSTESTn1 : 10.15 719.52 2.81 0.00 0.00 176969.70 6213.78 217482.43 00:22:26.666 =================================================================================================================== 00:22:26.666 Total : 719.52 2.81 0.00 0.00 176969.70 6213.78 217482.43 00:22:26.666 0 00:22:26.666 18:52:35 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:26.666 18:52:35 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1424258 00:22:26.666 18:52:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1424258 ']' 00:22:26.666 18:52:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1424258 00:22:26.666 18:52:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:26.666 18:52:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:26.666 18:52:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1424258 00:22:26.666 18:52:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:26.666 18:52:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:26.666 18:52:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1424258' 00:22:26.666 killing process with pid 1424258 00:22:26.666 18:52:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1424258 00:22:26.666 Received shutdown signal, test time was about 10.000000 seconds 00:22:26.666 00:22:26.666 Latency(us) 00:22:26.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:26.666 =================================================================================================================== 00:22:26.666 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:26.666 [2024-07-20 18:52:35.856489] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:26.666 18:52:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1424258 00:22:26.666 18:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1VRy3WDkZn 00:22:26.666 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:26.666 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1VRy3WDkZn 00:22:26.666 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:26.666 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:26.666 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:26.666 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:26.666 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1VRy3WDkZn 00:22:26.666 18:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:26.666 18:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:26.666 18:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:26.666 18:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1VRy3WDkZn' 00:22:26.666 18:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:26.666 18:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1425577 00:22:26.666 18:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:26.666 18:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1425577 /var/tmp/bdevperf.sock 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1425577 ']' 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:26.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.667 [2024-07-20 18:52:36.124385] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:26.667 [2024-07-20 18:52:36.124474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1425577 ] 00:22:26.667 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.667 [2024-07-20 18:52:36.184859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.667 [2024-07-20 18:52:36.267223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1VRy3WDkZn 00:22:26.667 [2024-07-20 18:52:36.610209] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:26.667 [2024-07-20 18:52:36.610325] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:26.667 [2024-07-20 18:52:36.616316] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:26.667 [2024-07-20 18:52:36.617202] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1870ed0 (107): Transport endpoint is not connected 00:22:26.667 [2024-07-20 18:52:36.618190] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1870ed0 (9): Bad file descriptor 00:22:26.667 [2024-07-20 18:52:36.619190] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:26.667 [2024-07-20 18:52:36.619209] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:26.667 [2024-07-20 18:52:36.619241] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:26.667 request: 00:22:26.667 { 00:22:26.667 "name": "TLSTEST", 00:22:26.667 "trtype": "tcp", 00:22:26.667 "traddr": "10.0.0.2", 00:22:26.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:26.667 "adrfam": "ipv4", 00:22:26.667 "trsvcid": "4420", 00:22:26.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.667 "psk": "/tmp/tmp.1VRy3WDkZn", 00:22:26.667 "method": "bdev_nvme_attach_controller", 00:22:26.667 "req_id": 1 00:22:26.667 } 00:22:26.667 Got JSON-RPC error response 00:22:26.667 response: 00:22:26.667 { 00:22:26.667 "code": -5, 00:22:26.667 "message": "Input/output error" 00:22:26.667 } 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1425577 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1425577 ']' 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1425577 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1425577 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1425577' 00:22:26.667 killing process with pid 1425577 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1425577 00:22:26.667 Received shutdown signal, test time was about 10.000000 seconds 00:22:26.667 00:22:26.667 Latency(us) 00:22:26.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.667 =================================================================================================================== 00:22:26.667 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:26.667 [2024-07-20 18:52:36.664115] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1425577 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UtJfFII5Dd 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UtJfFII5Dd 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UtJfFII5Dd 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UtJfFII5Dd' 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1425712 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1425712 /var/tmp/bdevperf.sock 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1425712 ']' 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:26.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:26.667 18:52:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.667 [2024-07-20 18:52:36.902925] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:26.667 [2024-07-20 18:52:36.903013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1425712 ] 00:22:26.667 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.667 [2024-07-20 18:52:36.966299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.925 [2024-07-20 18:52:37.050406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.925 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:26.925 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:26.925 18:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.UtJfFII5Dd 00:22:27.183 [2024-07-20 18:52:37.374864] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:27.183 [2024-07-20 18:52:37.374976] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:27.183 [2024-07-20 18:52:37.379983] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:27.183 [2024-07-20 18:52:37.380014] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:27.183 [2024-07-20 18:52:37.380068] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:27.183 [2024-07-20 18:52:37.380645] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d51ed0 (107): Transport endpoint is not connected 00:22:27.183 [2024-07-20 18:52:37.381634] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d51ed0 (9): Bad file descriptor 00:22:27.183 [2024-07-20 18:52:37.382632] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:27.183 [2024-07-20 18:52:37.382651] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:27.183 [2024-07-20 18:52:37.382682] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:27.183 request: 00:22:27.183 { 00:22:27.183 "name": "TLSTEST", 00:22:27.183 "trtype": "tcp", 00:22:27.183 "traddr": "10.0.0.2", 00:22:27.183 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:27.183 "adrfam": "ipv4", 00:22:27.183 "trsvcid": "4420", 00:22:27.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.183 "psk": "/tmp/tmp.UtJfFII5Dd", 00:22:27.183 "method": "bdev_nvme_attach_controller", 00:22:27.183 "req_id": 1 00:22:27.183 } 00:22:27.183 Got JSON-RPC error response 00:22:27.183 response: 00:22:27.183 { 00:22:27.183 "code": -5, 00:22:27.183 "message": "Input/output error" 00:22:27.183 } 00:22:27.183 18:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1425712 00:22:27.183 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1425712 ']' 00:22:27.183 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1425712 00:22:27.183 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:27.183 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:27.183 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1425712 00:22:27.183 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:27.183 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:27.183 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1425712' 00:22:27.183 killing process with pid 1425712 00:22:27.183 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1425712 00:22:27.183 Received shutdown signal, test time was about 10.000000 seconds 00:22:27.183 00:22:27.183 Latency(us) 00:22:27.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.183 =================================================================================================================== 00:22:27.183 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:27.183 [2024-07-20 18:52:37.430159] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:27.183 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1425712 00:22:27.440 18:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:27.440 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:27.440 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:27.440 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:27.440 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:27.440 18:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UtJfFII5Dd 00:22:27.440 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:27.440 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UtJfFII5Dd 00:22:27.441 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:27.441 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:27.441 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:27.441 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:22:27.441 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UtJfFII5Dd 00:22:27.441 18:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:27.441 18:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:27.441 18:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:27.441 18:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UtJfFII5Dd' 00:22:27.441 18:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:27.441 18:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1425816 00:22:27.441 18:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:27.441 18:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:27.441 18:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1425816 /var/tmp/bdevperf.sock 00:22:27.441 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1425816 ']' 00:22:27.441 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:27.441 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:27.441 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:27.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:27.441 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:27.441 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.441 [2024-07-20 18:52:37.683564] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:27.441 [2024-07-20 18:52:37.683645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1425816 ] 00:22:27.441 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.441 [2024-07-20 18:52:37.741241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.700 [2024-07-20 18:52:37.824427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.700 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:27.700 18:52:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:27.700 18:52:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UtJfFII5Dd 00:22:27.969 [2024-07-20 18:52:38.154515] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:27.969 [2024-07-20 18:52:38.154645] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:27.969 [2024-07-20 18:52:38.160159] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:27.969 [2024-07-20 18:52:38.160204] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:27.969 [2024-07-20 18:52:38.160257] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:27.969 [2024-07-20 18:52:38.160546] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x104aed0 (107): Transport endpoint is not connected 00:22:27.969 [2024-07-20 18:52:38.161534] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x104aed0 (9): Bad file descriptor 00:22:27.969 [2024-07-20 18:52:38.162533] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:27.969 [2024-07-20 18:52:38.162552] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:27.969 [2024-07-20 18:52:38.162585] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:27.969 request: 00:22:27.969 { 00:22:27.969 "name": "TLSTEST", 00:22:27.969 "trtype": "tcp", 00:22:27.969 "traddr": "10.0.0.2", 00:22:27.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:27.969 "adrfam": "ipv4", 00:22:27.969 "trsvcid": "4420", 00:22:27.969 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:27.969 "psk": "/tmp/tmp.UtJfFII5Dd", 00:22:27.969 "method": "bdev_nvme_attach_controller", 00:22:27.969 "req_id": 1 00:22:27.969 } 00:22:27.969 Got JSON-RPC error response 00:22:27.969 response: 00:22:27.969 { 00:22:27.969 "code": -5, 00:22:27.969 "message": "Input/output error" 00:22:27.969 } 00:22:27.969 18:52:38 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1425816 00:22:27.969 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1425816 ']' 00:22:27.969 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1425816 00:22:27.969 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:27.969 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:27.969 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1425816 00:22:27.969 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:27.969 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:27.969 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1425816' 00:22:27.969 killing process with pid 1425816 00:22:27.969 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1425816 00:22:27.969 Received shutdown signal, test time was about 10.000000 seconds 00:22:27.969 00:22:27.969 Latency(us) 00:22:27.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.969 =================================================================================================================== 00:22:27.969 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:27.969 [2024-07-20 18:52:38.213715] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:27.969 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1425816 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1425870 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1425870 /var/tmp/bdevperf.sock 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1425870 ']' 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:28.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:28.227 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.227 [2024-07-20 18:52:38.479336] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:28.227 [2024-07-20 18:52:38.479412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1425870 ] 00:22:28.227 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.227 [2024-07-20 18:52:38.538132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.485 [2024-07-20 18:52:38.623844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.485 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:28.485 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:28.485 18:52:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:28.743 [2024-07-20 18:52:38.963506] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:28.743 [2024-07-20 18:52:38.965405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15665c0 (9): Bad file descriptor 00:22:28.743 [2024-07-20 18:52:38.966400] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:28.743 [2024-07-20 18:52:38.966419] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:28.743 [2024-07-20 18:52:38.966450] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
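Same failure mode for the second negative case: the controller is attached with no --psk at all against the listener that was created with -k, so the connection is again dropped and the error response below mirrors the previous one. Both negative cases lean on the NOT helper seen in the xtrace; a hedged sketch of what such a wrapper does (the real helper in autotest_common.sh does more bookkeeping than this) is:

NOT() {
    # Run the wrapped command and invert the sense of its exit status:
    # the test step passes only when the command failed as expected.
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}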
00:22:28.743 request: 00:22:28.743 { 00:22:28.743 "name": "TLSTEST", 00:22:28.743 "trtype": "tcp", 00:22:28.743 "traddr": "10.0.0.2", 00:22:28.743 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:28.743 "adrfam": "ipv4", 00:22:28.743 "trsvcid": "4420", 00:22:28.743 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.743 "method": "bdev_nvme_attach_controller", 00:22:28.743 "req_id": 1 00:22:28.743 } 00:22:28.743 Got JSON-RPC error response 00:22:28.743 response: 00:22:28.743 { 00:22:28.743 "code": -5, 00:22:28.743 "message": "Input/output error" 00:22:28.743 } 00:22:28.743 18:52:38 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1425870 00:22:28.743 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1425870 ']' 00:22:28.743 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1425870 00:22:28.743 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:28.743 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:28.743 18:52:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1425870 00:22:28.743 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:28.743 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:28.743 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1425870' 00:22:28.743 killing process with pid 1425870 00:22:28.743 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1425870 00:22:28.743 Received shutdown signal, test time was about 10.000000 seconds 00:22:28.743 00:22:28.743 Latency(us) 00:22:28.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.743 =================================================================================================================== 00:22:28.743 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:28.743 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1425870 00:22:29.019 18:52:39 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:29.019 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:29.019 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:29.019 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:29.019 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:29.019 18:52:39 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1422488 00:22:29.019 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1422488 ']' 00:22:29.019 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1422488 00:22:29.019 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:29.019 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:29.019 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1422488 00:22:29.019 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:29.019 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:29.019 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1422488' 00:22:29.019 killing process with pid 1422488 00:22:29.019 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1422488 
00:22:29.019 [2024-07-20 18:52:39.255205] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:29.019 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1422488 00:22:29.277 18:52:39 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:29.277 18:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:29.277 18:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:29.277 18:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:29.277 18:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:29.277 18:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:29.278 18:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:29.278 18:52:39 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:29.278 18:52:39 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:29.278 18:52:39 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.kSRLDP6KZB 00:22:29.278 18:52:39 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:29.278 18:52:39 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.kSRLDP6KZB 00:22:29.278 18:52:39 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:29.278 18:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:29.278 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:29.278 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.278 18:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1426016 00:22:29.278 18:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:29.278 18:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1426016 00:22:29.278 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1426016 ']' 00:22:29.278 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.278 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:29.278 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.278 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:29.278 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.536 [2024-07-20 18:52:39.619584] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
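format_interchange_psk above turns the 48-character configured key into the long-form interchange string NVMeTLSkey-1:02:...: (hash indicator 02, i.e. the SHA-384 variant), which is then written to /tmp/tmp.kSRLDP6KZB and locked down to mode 0600. A minimal sketch of building such a string; the CRC32 suffix and its byte order are assumptions about the interchange format, not a quote of nvmf/common.sh:

key=00112233445566778899aabbccddeeff0011223344556677
b64=$(python3 -c "import base64, sys, zlib
k = sys.argv[1].encode()
crc = zlib.crc32(k).to_bytes(4, 'little')   # byte order assumed little-endian
print(base64.b64encode(k + crc).decode())" "$key")
echo "NVMeTLSkey-1:02:${b64}:"
# The key file must stay private; the 0600/0666 checks later in this run
# exercise exactly that:
# chmod 0600 /tmp/tmp.kSRLDP6KZB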
00:22:29.536 [2024-07-20 18:52:39.619677] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.536 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.536 [2024-07-20 18:52:39.688000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.536 [2024-07-20 18:52:39.776447] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.536 [2024-07-20 18:52:39.776511] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.536 [2024-07-20 18:52:39.776529] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.536 [2024-07-20 18:52:39.776542] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.536 [2024-07-20 18:52:39.776554] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:29.536 [2024-07-20 18:52:39.776590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.793 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:29.793 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:29.793 18:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:29.793 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:29.793 18:52:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.793 18:52:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.793 18:52:39 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.kSRLDP6KZB 00:22:29.793 18:52:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kSRLDP6KZB 00:22:29.793 18:52:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:30.053 [2024-07-20 18:52:40.159054] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.053 18:52:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:30.311 18:52:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:30.311 [2024-07-20 18:52:40.624270] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:30.311 [2024-07-20 18:52:40.624512] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.569 18:52:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:30.569 malloc0 00:22:30.569 18:52:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:30.827 18:52:41 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kSRLDP6KZB 
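That completes setup_nvmf_tgt for the working configuration: a TCP transport, subsystem cnode1 backed by malloc0, a listener created with -k (TLS required), and host1 registered with the key file. Condensed, the sequence of rpc.py calls from the xtrace above is:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kSRLDP6KZB
# the PSK-path deprecation warning printed by this last call follows just below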
00:22:31.084 [2024-07-20 18:52:41.365848] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:31.084 18:52:41 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kSRLDP6KZB 00:22:31.084 18:52:41 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:31.084 18:52:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:31.084 18:52:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:31.084 18:52:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kSRLDP6KZB' 00:22:31.084 18:52:41 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:31.084 18:52:41 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1426300 00:22:31.084 18:52:41 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:31.084 18:52:41 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:31.084 18:52:41 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1426300 /var/tmp/bdevperf.sock 00:22:31.084 18:52:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1426300 ']' 00:22:31.084 18:52:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.084 18:52:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:31.084 18:52:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:31.084 18:52:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:31.084 18:52:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.342 [2024-07-20 18:52:41.429657] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
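With host1 and its key registered, the positive path follows: bdevperf (pid 1426300) starts, the attach with the same key succeeds, and TLSTESTn1 is driven with verify I/O for 10 seconds. The two commands doing the work, as they appear in the trace below:

rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kSRLDP6KZB
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests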
00:22:31.342 [2024-07-20 18:52:41.429728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1426300 ] 00:22:31.342 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.342 [2024-07-20 18:52:41.490506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.342 [2024-07-20 18:52:41.576279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.599 18:52:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:31.599 18:52:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:31.599 18:52:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kSRLDP6KZB 00:22:31.599 [2024-07-20 18:52:41.902949] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:31.599 [2024-07-20 18:52:41.903080] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:31.855 TLSTESTn1 00:22:31.856 18:52:42 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:31.856 Running I/O for 10 seconds... 00:22:44.039 00:22:44.039 Latency(us) 00:22:44.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.039 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:44.039 Verification LBA range: start 0x0 length 0x2000 00:22:44.039 TLSTESTn1 : 10.11 899.67 3.51 0.00 0.00 141667.40 11505.21 219035.88 00:22:44.039 =================================================================================================================== 00:22:44.039 Total : 899.67 3.51 0.00 0.00 141667.40 11505.21 219035.88 00:22:44.039 0 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1426300 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1426300 ']' 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1426300 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1426300 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1426300' 00:22:44.039 killing process with pid 1426300 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1426300 00:22:44.039 Received shutdown signal, test time was about 10.000000 seconds 00:22:44.039 00:22:44.039 Latency(us) 00:22:44.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:44.039 =================================================================================================================== 00:22:44.039 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:44.039 [2024-07-20 18:52:52.289622] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1426300 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.kSRLDP6KZB 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kSRLDP6KZB 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kSRLDP6KZB 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kSRLDP6KZB 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kSRLDP6KZB' 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1427593 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1427593 /var/tmp/bdevperf.sock 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1427593 ']' 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:44.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.039 [2024-07-20 18:52:52.554376] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
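The key file has just been made world-readable (chmod 0666 above), so the attach that bdevperf pid 1427593 is about to issue is expected to be refused: bdev_nvme_load_psk rejects a PSK file with group/other access, and the JSON-RPC response below reports -1, Operation not permitted. A minimal sketch of the rule being exercised (an assumption about the check, not the SPDK source):

mode=$(stat -c '%a' /tmp/tmp.kSRLDP6KZB)      # 666 at this point in the run
if (( 8#$mode & 8#077 )); then
    echo "refusing PSK file with mode $mode" >&2
fi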
00:22:44.039 [2024-07-20 18:52:52.554453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1427593 ] 00:22:44.039 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.039 [2024-07-20 18:52:52.614463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.039 [2024-07-20 18:52:52.700741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:44.039 18:52:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kSRLDP6KZB 00:22:44.039 [2024-07-20 18:52:53.032992] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:44.039 [2024-07-20 18:52:53.033062] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:44.039 [2024-07-20 18:52:53.033091] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.kSRLDP6KZB 00:22:44.039 request: 00:22:44.039 { 00:22:44.039 "name": "TLSTEST", 00:22:44.039 "trtype": "tcp", 00:22:44.039 "traddr": "10.0.0.2", 00:22:44.039 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:44.039 "adrfam": "ipv4", 00:22:44.039 "trsvcid": "4420", 00:22:44.039 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.039 "psk": "/tmp/tmp.kSRLDP6KZB", 00:22:44.039 "method": "bdev_nvme_attach_controller", 00:22:44.039 "req_id": 1 00:22:44.039 } 00:22:44.039 Got JSON-RPC error response 00:22:44.039 response: 00:22:44.039 { 00:22:44.039 "code": -1, 00:22:44.039 "message": "Operation not permitted" 00:22:44.039 } 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1427593 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1427593 ']' 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1427593 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1427593 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1427593' 00:22:44.039 killing process with pid 1427593 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1427593 00:22:44.039 Received shutdown signal, test time was about 10.000000 seconds 00:22:44.039 00:22:44.039 Latency(us) 00:22:44.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.039 =================================================================================================================== 00:22:44.039 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 1427593 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1426016 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1426016 ']' 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1426016 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1426016 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:44.039 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1426016' 00:22:44.040 killing process with pid 1426016 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1426016 00:22:44.040 [2024-07-20 18:52:53.302983] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1426016 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1427650 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1427650 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1427650 ']' 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.040 [2024-07-20 18:52:53.606423] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:44.040 [2024-07-20 18:52:53.606527] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.040 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.040 [2024-07-20 18:52:53.677540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.040 [2024-07-20 18:52:53.766529] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.040 [2024-07-20 18:52:53.766596] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.040 [2024-07-20 18:52:53.766624] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.040 [2024-07-20 18:52:53.766638] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.040 [2024-07-20 18:52:53.766651] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:44.040 [2024-07-20 18:52:53.766681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.kSRLDP6KZB 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.kSRLDP6KZB 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.kSRLDP6KZB 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kSRLDP6KZB 00:22:44.040 18:52:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:44.040 [2024-07-20 18:52:54.142491] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.040 18:52:54 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:44.297 18:52:54 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:44.555 [2024-07-20 18:52:54.675985] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
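The same world-readable key is now exercised on the target side: setup_nvmf_tgt is wrapped in NOT because nvmf_subsystem_add_host is expected to fail once tcp_load_psk sees the 0666 file ("Incorrect permissions for PSK file", surfaced as the -32603 Internal error below). The failing call, as issued in the trace:

rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.kSRLDP6KZB     # still mode 0666, so the target refuses to load it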
00:22:44.555 [2024-07-20 18:52:54.676270] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.555 18:52:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:44.812 malloc0 00:22:44.812 18:52:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:45.069 18:52:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kSRLDP6KZB 00:22:45.326 [2024-07-20 18:52:55.554365] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:45.326 [2024-07-20 18:52:55.554407] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:45.326 [2024-07-20 18:52:55.554445] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:45.326 request: 00:22:45.326 { 00:22:45.326 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.326 "host": "nqn.2016-06.io.spdk:host1", 00:22:45.326 "psk": "/tmp/tmp.kSRLDP6KZB", 00:22:45.326 "method": "nvmf_subsystem_add_host", 00:22:45.326 "req_id": 1 00:22:45.326 } 00:22:45.326 Got JSON-RPC error response 00:22:45.326 response: 00:22:45.326 { 00:22:45.326 "code": -32603, 00:22:45.326 "message": "Internal error" 00:22:45.326 } 00:22:45.326 18:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:45.326 18:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:45.326 18:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:45.326 18:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:45.326 18:52:55 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1427650 00:22:45.326 18:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1427650 ']' 00:22:45.326 18:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1427650 00:22:45.326 18:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:45.326 18:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:45.326 18:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1427650 00:22:45.326 18:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:45.326 18:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:45.326 18:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1427650' 00:22:45.326 killing process with pid 1427650 00:22:45.326 18:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1427650 00:22:45.326 18:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1427650 00:22:45.584 18:52:55 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.kSRLDP6KZB 00:22:45.584 18:52:55 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:45.584 18:52:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:45.584 18:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:45.584 18:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.584 18:52:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=1427966 00:22:45.584 18:52:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:45.584 18:52:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1427966 00:22:45.584 18:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1427966 ']' 00:22:45.584 18:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.584 18:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:45.584 18:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.584 18:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:45.584 18:52:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.842 [2024-07-20 18:52:55.913923] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:22:45.842 [2024-07-20 18:52:55.914012] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.842 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.842 [2024-07-20 18:52:55.984715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.842 [2024-07-20 18:52:56.073810] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.842 [2024-07-20 18:52:56.073878] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.842 [2024-07-20 18:52:56.073905] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.842 [2024-07-20 18:52:56.073919] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.842 [2024-07-20 18:52:56.073931] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:45.842 [2024-07-20 18:52:56.073969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.100 18:52:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:46.100 18:52:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:46.100 18:52:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:46.100 18:52:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:46.100 18:52:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.100 18:52:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.100 18:52:56 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.kSRLDP6KZB 00:22:46.100 18:52:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kSRLDP6KZB 00:22:46.100 18:52:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:46.366 [2024-07-20 18:52:56.472260] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.366 18:52:56 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:46.623 18:52:56 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:46.623 [2024-07-20 18:52:56.937462] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:46.623 [2024-07-20 18:52:56.937722] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.881 18:52:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:46.881 malloc0 00:22:47.138 18:52:57 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:47.138 18:52:57 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kSRLDP6KZB 00:22:47.395 [2024-07-20 18:52:57.678872] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:47.395 18:52:57 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1428220 00:22:47.395 18:52:57 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:47.395 18:52:57 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:47.395 18:52:57 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1428220 /var/tmp/bdevperf.sock 00:22:47.395 18:52:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1428220 ']' 00:22:47.395 18:52:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:47.395 18:52:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:47.395 18:52:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:47.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:47.395 18:52:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:47.395 18:52:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.652 [2024-07-20 18:52:57.741495] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:22:47.652 [2024-07-20 18:52:57.741581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1428220 ] 00:22:47.652 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.652 [2024-07-20 18:52:57.800116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.652 [2024-07-20 18:52:57.885718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.936 18:52:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:47.936 18:52:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:47.936 18:52:57 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kSRLDP6KZB 00:22:47.936 [2024-07-20 18:52:58.236550] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:47.936 [2024-07-20 18:52:58.236677] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:48.194 TLSTESTn1 00:22:48.194 18:52:58 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:48.451 18:52:58 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:48.451 "subsystems": [ 00:22:48.451 { 00:22:48.451 "subsystem": "keyring", 00:22:48.451 "config": [] 00:22:48.451 }, 00:22:48.451 { 00:22:48.451 "subsystem": "iobuf", 00:22:48.451 "config": [ 00:22:48.451 { 00:22:48.451 "method": "iobuf_set_options", 00:22:48.451 "params": { 00:22:48.451 "small_pool_count": 8192, 00:22:48.451 "large_pool_count": 1024, 00:22:48.451 "small_bufsize": 8192, 00:22:48.451 "large_bufsize": 135168 00:22:48.451 } 00:22:48.451 } 00:22:48.451 ] 00:22:48.451 }, 00:22:48.451 { 00:22:48.451 "subsystem": "sock", 00:22:48.451 "config": [ 00:22:48.451 { 00:22:48.451 "method": "sock_set_default_impl", 00:22:48.451 "params": { 00:22:48.451 "impl_name": "posix" 00:22:48.451 } 00:22:48.451 }, 00:22:48.451 { 00:22:48.451 "method": "sock_impl_set_options", 00:22:48.451 "params": { 00:22:48.451 "impl_name": "ssl", 00:22:48.451 "recv_buf_size": 4096, 00:22:48.451 "send_buf_size": 4096, 00:22:48.451 "enable_recv_pipe": true, 00:22:48.451 "enable_quickack": false, 00:22:48.451 "enable_placement_id": 0, 00:22:48.451 "enable_zerocopy_send_server": true, 00:22:48.451 "enable_zerocopy_send_client": false, 00:22:48.451 "zerocopy_threshold": 0, 00:22:48.451 "tls_version": 0, 00:22:48.451 "enable_ktls": false 00:22:48.451 } 00:22:48.452 }, 00:22:48.452 { 00:22:48.452 "method": "sock_impl_set_options", 00:22:48.452 "params": { 00:22:48.452 "impl_name": "posix", 00:22:48.452 "recv_buf_size": 2097152, 00:22:48.452 "send_buf_size": 
2097152, 00:22:48.452 "enable_recv_pipe": true, 00:22:48.452 "enable_quickack": false, 00:22:48.452 "enable_placement_id": 0, 00:22:48.452 "enable_zerocopy_send_server": true, 00:22:48.452 "enable_zerocopy_send_client": false, 00:22:48.452 "zerocopy_threshold": 0, 00:22:48.452 "tls_version": 0, 00:22:48.452 "enable_ktls": false 00:22:48.452 } 00:22:48.452 } 00:22:48.452 ] 00:22:48.452 }, 00:22:48.452 { 00:22:48.452 "subsystem": "vmd", 00:22:48.452 "config": [] 00:22:48.452 }, 00:22:48.452 { 00:22:48.452 "subsystem": "accel", 00:22:48.452 "config": [ 00:22:48.452 { 00:22:48.452 "method": "accel_set_options", 00:22:48.452 "params": { 00:22:48.452 "small_cache_size": 128, 00:22:48.452 "large_cache_size": 16, 00:22:48.452 "task_count": 2048, 00:22:48.452 "sequence_count": 2048, 00:22:48.452 "buf_count": 2048 00:22:48.452 } 00:22:48.452 } 00:22:48.452 ] 00:22:48.452 }, 00:22:48.452 { 00:22:48.452 "subsystem": "bdev", 00:22:48.452 "config": [ 00:22:48.452 { 00:22:48.452 "method": "bdev_set_options", 00:22:48.452 "params": { 00:22:48.452 "bdev_io_pool_size": 65535, 00:22:48.452 "bdev_io_cache_size": 256, 00:22:48.452 "bdev_auto_examine": true, 00:22:48.452 "iobuf_small_cache_size": 128, 00:22:48.452 "iobuf_large_cache_size": 16 00:22:48.452 } 00:22:48.452 }, 00:22:48.452 { 00:22:48.452 "method": "bdev_raid_set_options", 00:22:48.452 "params": { 00:22:48.452 "process_window_size_kb": 1024 00:22:48.452 } 00:22:48.452 }, 00:22:48.452 { 00:22:48.452 "method": "bdev_iscsi_set_options", 00:22:48.452 "params": { 00:22:48.452 "timeout_sec": 30 00:22:48.452 } 00:22:48.452 }, 00:22:48.452 { 00:22:48.452 "method": "bdev_nvme_set_options", 00:22:48.452 "params": { 00:22:48.452 "action_on_timeout": "none", 00:22:48.452 "timeout_us": 0, 00:22:48.452 "timeout_admin_us": 0, 00:22:48.452 "keep_alive_timeout_ms": 10000, 00:22:48.452 "arbitration_burst": 0, 00:22:48.452 "low_priority_weight": 0, 00:22:48.452 "medium_priority_weight": 0, 00:22:48.452 "high_priority_weight": 0, 00:22:48.452 "nvme_adminq_poll_period_us": 10000, 00:22:48.452 "nvme_ioq_poll_period_us": 0, 00:22:48.452 "io_queue_requests": 0, 00:22:48.452 "delay_cmd_submit": true, 00:22:48.452 "transport_retry_count": 4, 00:22:48.452 "bdev_retry_count": 3, 00:22:48.452 "transport_ack_timeout": 0, 00:22:48.452 "ctrlr_loss_timeout_sec": 0, 00:22:48.452 "reconnect_delay_sec": 0, 00:22:48.452 "fast_io_fail_timeout_sec": 0, 00:22:48.452 "disable_auto_failback": false, 00:22:48.452 "generate_uuids": false, 00:22:48.452 "transport_tos": 0, 00:22:48.452 "nvme_error_stat": false, 00:22:48.452 "rdma_srq_size": 0, 00:22:48.452 "io_path_stat": false, 00:22:48.452 "allow_accel_sequence": false, 00:22:48.452 "rdma_max_cq_size": 0, 00:22:48.452 "rdma_cm_event_timeout_ms": 0, 00:22:48.452 "dhchap_digests": [ 00:22:48.452 "sha256", 00:22:48.452 "sha384", 00:22:48.452 "sha512" 00:22:48.452 ], 00:22:48.452 "dhchap_dhgroups": [ 00:22:48.452 "null", 00:22:48.452 "ffdhe2048", 00:22:48.452 "ffdhe3072", 00:22:48.452 "ffdhe4096", 00:22:48.452 "ffdhe6144", 00:22:48.452 "ffdhe8192" 00:22:48.452 ] 00:22:48.452 } 00:22:48.452 }, 00:22:48.452 { 00:22:48.452 "method": "bdev_nvme_set_hotplug", 00:22:48.452 "params": { 00:22:48.452 "period_us": 100000, 00:22:48.452 "enable": false 00:22:48.452 } 00:22:48.452 }, 00:22:48.452 { 00:22:48.452 "method": "bdev_malloc_create", 00:22:48.452 "params": { 00:22:48.452 "name": "malloc0", 00:22:48.452 "num_blocks": 8192, 00:22:48.452 "block_size": 4096, 00:22:48.452 "physical_block_size": 4096, 00:22:48.452 "uuid": 
"289677d4-db8c-4407-8fd8-f8bda209bcf4", 00:22:48.452 "optimal_io_boundary": 0 00:22:48.452 } 00:22:48.452 }, 00:22:48.452 { 00:22:48.452 "method": "bdev_wait_for_examine" 00:22:48.452 } 00:22:48.452 ] 00:22:48.452 }, 00:22:48.452 { 00:22:48.452 "subsystem": "nbd", 00:22:48.452 "config": [] 00:22:48.452 }, 00:22:48.452 { 00:22:48.452 "subsystem": "scheduler", 00:22:48.452 "config": [ 00:22:48.452 { 00:22:48.452 "method": "framework_set_scheduler", 00:22:48.452 "params": { 00:22:48.452 "name": "static" 00:22:48.452 } 00:22:48.452 } 00:22:48.452 ] 00:22:48.452 }, 00:22:48.452 { 00:22:48.452 "subsystem": "nvmf", 00:22:48.452 "config": [ 00:22:48.452 { 00:22:48.452 "method": "nvmf_set_config", 00:22:48.452 "params": { 00:22:48.452 "discovery_filter": "match_any", 00:22:48.452 "admin_cmd_passthru": { 00:22:48.452 "identify_ctrlr": false 00:22:48.452 } 00:22:48.452 } 00:22:48.452 }, 00:22:48.452 { 00:22:48.452 "method": "nvmf_set_max_subsystems", 00:22:48.452 "params": { 00:22:48.452 "max_subsystems": 1024 00:22:48.452 } 00:22:48.452 }, 00:22:48.452 { 00:22:48.452 "method": "nvmf_set_crdt", 00:22:48.452 "params": { 00:22:48.452 "crdt1": 0, 00:22:48.452 "crdt2": 0, 00:22:48.452 "crdt3": 0 00:22:48.452 } 00:22:48.452 }, 00:22:48.452 { 00:22:48.452 "method": "nvmf_create_transport", 00:22:48.452 "params": { 00:22:48.452 "trtype": "TCP", 00:22:48.452 "max_queue_depth": 128, 00:22:48.452 "max_io_qpairs_per_ctrlr": 127, 00:22:48.452 "in_capsule_data_size": 4096, 00:22:48.452 "max_io_size": 131072, 00:22:48.452 "io_unit_size": 131072, 00:22:48.452 "max_aq_depth": 128, 00:22:48.452 "num_shared_buffers": 511, 00:22:48.452 "buf_cache_size": 4294967295, 00:22:48.452 "dif_insert_or_strip": false, 00:22:48.452 "zcopy": false, 00:22:48.452 "c2h_success": false, 00:22:48.452 "sock_priority": 0, 00:22:48.452 "abort_timeout_sec": 1, 00:22:48.452 "ack_timeout": 0, 00:22:48.452 "data_wr_pool_size": 0 00:22:48.452 } 00:22:48.452 }, 00:22:48.452 { 00:22:48.452 "method": "nvmf_create_subsystem", 00:22:48.452 "params": { 00:22:48.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.452 "allow_any_host": false, 00:22:48.452 "serial_number": "SPDK00000000000001", 00:22:48.452 "model_number": "SPDK bdev Controller", 00:22:48.452 "max_namespaces": 10, 00:22:48.452 "min_cntlid": 1, 00:22:48.452 "max_cntlid": 65519, 00:22:48.452 "ana_reporting": false 00:22:48.452 } 00:22:48.452 }, 00:22:48.452 { 00:22:48.452 "method": "nvmf_subsystem_add_host", 00:22:48.452 "params": { 00:22:48.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.452 "host": "nqn.2016-06.io.spdk:host1", 00:22:48.452 "psk": "/tmp/tmp.kSRLDP6KZB" 00:22:48.452 } 00:22:48.452 }, 00:22:48.452 { 00:22:48.452 "method": "nvmf_subsystem_add_ns", 00:22:48.452 "params": { 00:22:48.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.452 "namespace": { 00:22:48.452 "nsid": 1, 00:22:48.452 "bdev_name": "malloc0", 00:22:48.452 "nguid": "289677D4DB8C44078FD8F8BDA209BCF4", 00:22:48.452 "uuid": "289677d4-db8c-4407-8fd8-f8bda209bcf4", 00:22:48.452 "no_auto_visible": false 00:22:48.452 } 00:22:48.452 } 00:22:48.452 }, 00:22:48.452 { 00:22:48.452 "method": "nvmf_subsystem_add_listener", 00:22:48.452 "params": { 00:22:48.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.452 "listen_address": { 00:22:48.452 "trtype": "TCP", 00:22:48.452 "adrfam": "IPv4", 00:22:48.452 "traddr": "10.0.0.2", 00:22:48.452 "trsvcid": "4420" 00:22:48.452 }, 00:22:48.452 "secure_channel": true 00:22:48.452 } 00:22:48.452 } 00:22:48.452 ] 00:22:48.452 } 00:22:48.452 ] 00:22:48.452 }' 00:22:48.452 18:52:58 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:48.710 18:52:58 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:48.710 "subsystems": [ 00:22:48.710 { 00:22:48.710 "subsystem": "keyring", 00:22:48.710 "config": [] 00:22:48.710 }, 00:22:48.710 { 00:22:48.710 "subsystem": "iobuf", 00:22:48.710 "config": [ 00:22:48.710 { 00:22:48.710 "method": "iobuf_set_options", 00:22:48.710 "params": { 00:22:48.710 "small_pool_count": 8192, 00:22:48.710 "large_pool_count": 1024, 00:22:48.710 "small_bufsize": 8192, 00:22:48.710 "large_bufsize": 135168 00:22:48.710 } 00:22:48.710 } 00:22:48.710 ] 00:22:48.710 }, 00:22:48.710 { 00:22:48.710 "subsystem": "sock", 00:22:48.710 "config": [ 00:22:48.710 { 00:22:48.710 "method": "sock_set_default_impl", 00:22:48.710 "params": { 00:22:48.710 "impl_name": "posix" 00:22:48.710 } 00:22:48.710 }, 00:22:48.710 { 00:22:48.710 "method": "sock_impl_set_options", 00:22:48.710 "params": { 00:22:48.710 "impl_name": "ssl", 00:22:48.710 "recv_buf_size": 4096, 00:22:48.710 "send_buf_size": 4096, 00:22:48.710 "enable_recv_pipe": true, 00:22:48.710 "enable_quickack": false, 00:22:48.710 "enable_placement_id": 0, 00:22:48.710 "enable_zerocopy_send_server": true, 00:22:48.710 "enable_zerocopy_send_client": false, 00:22:48.710 "zerocopy_threshold": 0, 00:22:48.710 "tls_version": 0, 00:22:48.710 "enable_ktls": false 00:22:48.710 } 00:22:48.710 }, 00:22:48.710 { 00:22:48.710 "method": "sock_impl_set_options", 00:22:48.710 "params": { 00:22:48.710 "impl_name": "posix", 00:22:48.710 "recv_buf_size": 2097152, 00:22:48.710 "send_buf_size": 2097152, 00:22:48.710 "enable_recv_pipe": true, 00:22:48.710 "enable_quickack": false, 00:22:48.710 "enable_placement_id": 0, 00:22:48.710 "enable_zerocopy_send_server": true, 00:22:48.710 "enable_zerocopy_send_client": false, 00:22:48.710 "zerocopy_threshold": 0, 00:22:48.710 "tls_version": 0, 00:22:48.710 "enable_ktls": false 00:22:48.710 } 00:22:48.710 } 00:22:48.710 ] 00:22:48.710 }, 00:22:48.710 { 00:22:48.710 "subsystem": "vmd", 00:22:48.710 "config": [] 00:22:48.710 }, 00:22:48.710 { 00:22:48.710 "subsystem": "accel", 00:22:48.710 "config": [ 00:22:48.710 { 00:22:48.710 "method": "accel_set_options", 00:22:48.710 "params": { 00:22:48.710 "small_cache_size": 128, 00:22:48.710 "large_cache_size": 16, 00:22:48.710 "task_count": 2048, 00:22:48.710 "sequence_count": 2048, 00:22:48.710 "buf_count": 2048 00:22:48.710 } 00:22:48.710 } 00:22:48.710 ] 00:22:48.710 }, 00:22:48.710 { 00:22:48.710 "subsystem": "bdev", 00:22:48.710 "config": [ 00:22:48.710 { 00:22:48.710 "method": "bdev_set_options", 00:22:48.710 "params": { 00:22:48.710 "bdev_io_pool_size": 65535, 00:22:48.710 "bdev_io_cache_size": 256, 00:22:48.710 "bdev_auto_examine": true, 00:22:48.710 "iobuf_small_cache_size": 128, 00:22:48.710 "iobuf_large_cache_size": 16 00:22:48.710 } 00:22:48.710 }, 00:22:48.710 { 00:22:48.710 "method": "bdev_raid_set_options", 00:22:48.710 "params": { 00:22:48.710 "process_window_size_kb": 1024 00:22:48.710 } 00:22:48.710 }, 00:22:48.710 { 00:22:48.710 "method": "bdev_iscsi_set_options", 00:22:48.710 "params": { 00:22:48.710 "timeout_sec": 30 00:22:48.710 } 00:22:48.710 }, 00:22:48.710 { 00:22:48.710 "method": "bdev_nvme_set_options", 00:22:48.710 "params": { 00:22:48.710 "action_on_timeout": "none", 00:22:48.710 "timeout_us": 0, 00:22:48.710 "timeout_admin_us": 0, 00:22:48.710 "keep_alive_timeout_ms": 10000, 00:22:48.710 "arbitration_burst": 0, 
00:22:48.710 "low_priority_weight": 0, 00:22:48.710 "medium_priority_weight": 0, 00:22:48.710 "high_priority_weight": 0, 00:22:48.710 "nvme_adminq_poll_period_us": 10000, 00:22:48.710 "nvme_ioq_poll_period_us": 0, 00:22:48.710 "io_queue_requests": 512, 00:22:48.710 "delay_cmd_submit": true, 00:22:48.710 "transport_retry_count": 4, 00:22:48.710 "bdev_retry_count": 3, 00:22:48.710 "transport_ack_timeout": 0, 00:22:48.710 "ctrlr_loss_timeout_sec": 0, 00:22:48.710 "reconnect_delay_sec": 0, 00:22:48.710 "fast_io_fail_timeout_sec": 0, 00:22:48.710 "disable_auto_failback": false, 00:22:48.710 "generate_uuids": false, 00:22:48.710 "transport_tos": 0, 00:22:48.710 "nvme_error_stat": false, 00:22:48.710 "rdma_srq_size": 0, 00:22:48.710 "io_path_stat": false, 00:22:48.710 "allow_accel_sequence": false, 00:22:48.710 "rdma_max_cq_size": 0, 00:22:48.710 "rdma_cm_event_timeout_ms": 0, 00:22:48.710 "dhchap_digests": [ 00:22:48.710 "sha256", 00:22:48.710 "sha384", 00:22:48.710 "sha512" 00:22:48.710 ], 00:22:48.710 "dhchap_dhgroups": [ 00:22:48.710 "null", 00:22:48.710 "ffdhe2048", 00:22:48.710 "ffdhe3072", 00:22:48.710 "ffdhe4096", 00:22:48.710 "ffdhe6144", 00:22:48.710 "ffdhe8192" 00:22:48.710 ] 00:22:48.710 } 00:22:48.710 }, 00:22:48.710 { 00:22:48.710 "method": "bdev_nvme_attach_controller", 00:22:48.710 "params": { 00:22:48.710 "name": "TLSTEST", 00:22:48.710 "trtype": "TCP", 00:22:48.710 "adrfam": "IPv4", 00:22:48.710 "traddr": "10.0.0.2", 00:22:48.710 "trsvcid": "4420", 00:22:48.710 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.710 "prchk_reftag": false, 00:22:48.710 "prchk_guard": false, 00:22:48.710 "ctrlr_loss_timeout_sec": 0, 00:22:48.710 "reconnect_delay_sec": 0, 00:22:48.710 "fast_io_fail_timeout_sec": 0, 00:22:48.710 "psk": "/tmp/tmp.kSRLDP6KZB", 00:22:48.710 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:48.710 "hdgst": false, 00:22:48.710 "ddgst": false 00:22:48.710 } 00:22:48.710 }, 00:22:48.710 { 00:22:48.710 "method": "bdev_nvme_set_hotplug", 00:22:48.710 "params": { 00:22:48.710 "period_us": 100000, 00:22:48.710 "enable": false 00:22:48.710 } 00:22:48.710 }, 00:22:48.710 { 00:22:48.710 "method": "bdev_wait_for_examine" 00:22:48.710 } 00:22:48.710 ] 00:22:48.710 }, 00:22:48.710 { 00:22:48.710 "subsystem": "nbd", 00:22:48.710 "config": [] 00:22:48.710 } 00:22:48.710 ] 00:22:48.710 }' 00:22:48.710 18:52:58 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1428220 00:22:48.710 18:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1428220 ']' 00:22:48.710 18:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1428220 00:22:48.710 18:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:48.710 18:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:48.710 18:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1428220 00:22:48.710 18:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:48.710 18:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:48.710 18:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1428220' 00:22:48.710 killing process with pid 1428220 00:22:48.710 18:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1428220 00:22:48.710 Received shutdown signal, test time was about 10.000000 seconds 00:22:48.710 00:22:48.710 Latency(us) 00:22:48.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:22:48.710 =================================================================================================================== 00:22:48.710 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:48.710 [2024-07-20 18:52:58.978904] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:48.710 18:52:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1428220 00:22:48.967 18:52:59 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1427966 00:22:48.967 18:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1427966 ']' 00:22:48.967 18:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1427966 00:22:48.967 18:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:48.967 18:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:48.967 18:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1427966 00:22:48.967 18:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:48.967 18:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:48.967 18:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1427966' 00:22:48.967 killing process with pid 1427966 00:22:48.967 18:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1427966 00:22:48.967 [2024-07-20 18:52:59.231985] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:48.967 18:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1427966 00:22:49.225 18:52:59 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:49.225 18:52:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:49.225 18:52:59 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:49.225 "subsystems": [ 00:22:49.225 { 00:22:49.225 "subsystem": "keyring", 00:22:49.225 "config": [] 00:22:49.225 }, 00:22:49.225 { 00:22:49.225 "subsystem": "iobuf", 00:22:49.225 "config": [ 00:22:49.225 { 00:22:49.225 "method": "iobuf_set_options", 00:22:49.225 "params": { 00:22:49.225 "small_pool_count": 8192, 00:22:49.225 "large_pool_count": 1024, 00:22:49.225 "small_bufsize": 8192, 00:22:49.225 "large_bufsize": 135168 00:22:49.225 } 00:22:49.225 } 00:22:49.225 ] 00:22:49.225 }, 00:22:49.225 { 00:22:49.225 "subsystem": "sock", 00:22:49.225 "config": [ 00:22:49.225 { 00:22:49.225 "method": "sock_set_default_impl", 00:22:49.225 "params": { 00:22:49.225 "impl_name": "posix" 00:22:49.225 } 00:22:49.225 }, 00:22:49.225 { 00:22:49.225 "method": "sock_impl_set_options", 00:22:49.225 "params": { 00:22:49.225 "impl_name": "ssl", 00:22:49.225 "recv_buf_size": 4096, 00:22:49.225 "send_buf_size": 4096, 00:22:49.225 "enable_recv_pipe": true, 00:22:49.225 "enable_quickack": false, 00:22:49.225 "enable_placement_id": 0, 00:22:49.225 "enable_zerocopy_send_server": true, 00:22:49.225 "enable_zerocopy_send_client": false, 00:22:49.225 "zerocopy_threshold": 0, 00:22:49.225 "tls_version": 0, 00:22:49.225 "enable_ktls": false 00:22:49.225 } 00:22:49.225 }, 00:22:49.225 { 00:22:49.225 "method": "sock_impl_set_options", 00:22:49.225 "params": { 00:22:49.225 "impl_name": "posix", 00:22:49.225 "recv_buf_size": 2097152, 00:22:49.225 "send_buf_size": 2097152, 00:22:49.225 "enable_recv_pipe": true, 
00:22:49.225 "enable_quickack": false, 00:22:49.225 "enable_placement_id": 0, 00:22:49.225 "enable_zerocopy_send_server": true, 00:22:49.225 "enable_zerocopy_send_client": false, 00:22:49.225 "zerocopy_threshold": 0, 00:22:49.225 "tls_version": 0, 00:22:49.225 "enable_ktls": false 00:22:49.225 } 00:22:49.225 } 00:22:49.225 ] 00:22:49.225 }, 00:22:49.225 { 00:22:49.225 "subsystem": "vmd", 00:22:49.225 "config": [] 00:22:49.225 }, 00:22:49.226 { 00:22:49.226 "subsystem": "accel", 00:22:49.226 "config": [ 00:22:49.226 { 00:22:49.226 "method": "accel_set_options", 00:22:49.226 "params": { 00:22:49.226 "small_cache_size": 128, 00:22:49.226 "large_cache_size": 16, 00:22:49.226 "task_count": 2048, 00:22:49.226 "sequence_count": 2048, 00:22:49.226 "buf_count": 2048 00:22:49.226 } 00:22:49.226 } 00:22:49.226 ] 00:22:49.226 }, 00:22:49.226 { 00:22:49.226 "subsystem": "bdev", 00:22:49.226 "config": [ 00:22:49.226 { 00:22:49.226 "method": "bdev_set_options", 00:22:49.226 "params": { 00:22:49.226 "bdev_io_pool_size": 65535, 00:22:49.226 "bdev_io_cache_size": 256, 00:22:49.226 "bdev_auto_examine": true, 00:22:49.226 "iobuf_small_cache_size": 128, 00:22:49.226 "iobuf_large_cache_size": 16 00:22:49.226 } 00:22:49.226 }, 00:22:49.226 { 00:22:49.226 "method": "bdev_raid_set_options", 00:22:49.226 "params": { 00:22:49.226 "process_window_size_kb": 1024 00:22:49.226 } 00:22:49.226 }, 00:22:49.226 { 00:22:49.226 "method": "bdev_iscsi_set_options", 00:22:49.226 "params": { 00:22:49.226 "timeout_sec": 30 00:22:49.226 } 00:22:49.226 }, 00:22:49.226 { 00:22:49.226 "method": "bdev_nvme_set_options", 00:22:49.226 "params": { 00:22:49.226 "action_on_timeout": "none", 00:22:49.226 "timeout_us": 0, 00:22:49.226 "timeout_admin_us": 0, 00:22:49.226 "keep_alive_timeout_ms": 10000, 00:22:49.226 "arbitration_burst": 0, 00:22:49.226 "low_priority_weight": 0, 00:22:49.226 "medium_priority_weight": 0, 00:22:49.226 "high_priority_weight": 0, 00:22:49.226 "nvme_adminq_poll_period_us": 10000, 00:22:49.226 "nvme_ioq_poll_period_us": 0, 00:22:49.226 "io_queue_requests": 0, 00:22:49.226 "delay_cmd_submit": true, 00:22:49.226 "transport_retry_count": 4, 00:22:49.226 "bdev_retry_count": 3, 00:22:49.226 "transport_ack_timeout": 0, 00:22:49.226 "ctrlr_loss_timeout_sec": 0, 00:22:49.226 "reconnect_delay_sec": 0, 00:22:49.226 "fast_io_fail_timeout_sec": 0, 00:22:49.226 "disable_auto_failback": false, 00:22:49.226 "generate_uuids": false, 00:22:49.226 "transport_tos": 0, 00:22:49.226 "nvme_error_stat": false, 00:22:49.226 "rdma_srq_size": 0, 00:22:49.226 "io_path_stat": false, 00:22:49.226 "allow_accel_sequence": false, 00:22:49.226 "rdma_max_cq_size": 0, 00:22:49.226 "rdma_cm_event_timeout_ms": 0, 00:22:49.226 "dhchap_digests": [ 00:22:49.226 "sha256", 00:22:49.226 "sha384", 00:22:49.226 "sha512" 00:22:49.226 ], 00:22:49.226 "dhchap_dhgroups": [ 00:22:49.226 "null", 00:22:49.226 "ffdhe2048", 00:22:49.226 "ffdhe3072", 00:22:49.226 "ffdhe4096", 00:22:49.226 "ffdhe6144", 00:22:49.226 "ffdhe8192" 00:22:49.226 ] 00:22:49.226 } 00:22:49.226 }, 00:22:49.226 { 00:22:49.226 "method": "bdev_nvme_set_hotplug", 00:22:49.226 "params": { 00:22:49.226 "period_us": 100000, 00:22:49.226 "enable": false 00:22:49.226 } 00:22:49.226 }, 00:22:49.226 { 00:22:49.226 "method": "bdev_malloc_create", 00:22:49.226 "params": { 00:22:49.226 "name": "malloc0", 00:22:49.226 "num_blocks": 8192, 00:22:49.226 "block_size": 4096, 00:22:49.226 "physical_block_size": 4096, 00:22:49.226 "uuid": "289677d4-db8c-4407-8fd8-f8bda209bcf4", 00:22:49.226 "optimal_io_boundary": 0 
00:22:49.226 } 00:22:49.226 }, 00:22:49.226 { 00:22:49.226 "method": "bdev_wait_for_examine" 00:22:49.226 } 00:22:49.226 ] 00:22:49.226 }, 00:22:49.226 { 00:22:49.226 "subsystem": "nbd", 00:22:49.226 "config": [] 00:22:49.226 }, 00:22:49.226 { 00:22:49.226 "subsystem": "scheduler", 00:22:49.226 "config": [ 00:22:49.226 { 00:22:49.226 "method": "framework_set_scheduler", 00:22:49.226 "params": { 00:22:49.226 "name": "static" 00:22:49.226 } 00:22:49.226 } 00:22:49.226 ] 00:22:49.226 }, 00:22:49.226 { 00:22:49.226 "subsystem": "nvmf", 00:22:49.226 "config": [ 00:22:49.226 { 00:22:49.226 "method": "nvmf_set_config", 00:22:49.226 "params": { 00:22:49.226 "discovery_filter": "match_any", 00:22:49.226 "admin_cmd_passthru": { 00:22:49.226 "identify_ctrlr": false 00:22:49.226 } 00:22:49.226 } 00:22:49.226 }, 00:22:49.226 { 00:22:49.226 "method": "nvmf_set_max_subsystems", 00:22:49.226 "params": { 00:22:49.226 "max_subsystems": 1024 00:22:49.226 } 00:22:49.226 }, 00:22:49.226 { 00:22:49.226 "method": "nvmf_set_crdt", 00:22:49.226 "params": { 00:22:49.226 "crdt1": 0, 00:22:49.226 "crdt2": 0, 00:22:49.226 "crdt3": 0 00:22:49.226 } 00:22:49.226 }, 00:22:49.226 { 00:22:49.226 "method": "nvmf_create_transport", 00:22:49.226 "params": { 00:22:49.226 "trtype": "TCP", 00:22:49.226 "max_queue_depth": 128, 00:22:49.226 "max_io_qpairs_per_ctrlr": 127, 00:22:49.226 "in_capsule_data_size": 4096, 00:22:49.226 "max_io_size": 131072, 00:22:49.226 "io_unit_size": 131072, 00:22:49.226 "max_aq_depth": 128, 00:22:49.226 "num_shared_buffers": 511, 00:22:49.226 "buf_cache_size": 4294967295, 00:22:49.226 "dif_insert_or_strip": false, 00:22:49.226 "zcopy": false, 00:22:49.226 "c2h_success": false, 00:22:49.226 "sock_priority": 0, 00:22:49.226 "abort_timeout_sec": 1, 00:22:49.226 "ack_timeout": 0, 00:22:49.226 "data_wr_pool_size": 0 00:22:49.226 } 00:22:49.226 }, 00:22:49.226 { 00:22:49.226 "method": "nvmf_create_subsystem", 00:22:49.226 "params": { 00:22:49.226 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.226 "allow_any_host": false, 00:22:49.226 "serial_number": "SPDK00000000000001", 00:22:49.226 "model_number": "SPDK bdev Controller", 00:22:49.226 "max_namespaces": 10, 00:22:49.226 "min_cntlid": 1, 00:22:49.226 "max_cntlid": 65519, 00:22:49.226 "ana_reporting": false 00:22:49.226 } 00:22:49.226 }, 00:22:49.226 { 00:22:49.226 "method": "nvmf_subsystem_add_host", 00:22:49.226 "params": { 00:22:49.226 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.226 "host": "nqn.2016-06.io.spdk:host1", 00:22:49.226 "psk": "/tmp/tmp.kSRLDP6KZB" 00:22:49.226 } 00:22:49.226 }, 00:22:49.226 { 00:22:49.226 "method": "nvmf_subsystem_add_ns", 00:22:49.226 "params": { 00:22:49.226 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.226 "namespace": { 00:22:49.226 "nsid": 1, 00:22:49.226 "bdev_name": "malloc0", 00:22:49.226 "nguid": "289677D4DB8C44078FD8F8BDA209BCF4", 00:22:49.226 "uuid": "289677d4-db8c-4407-8fd8-f8bda209bcf4", 00:22:49.226 "no_auto_visible": false 00:22:49.226 } 00:22:49.226 } 00:22:49.226 }, 00:22:49.226 { 00:22:49.226 "method": "nvmf_subsystem_add_listener", 00:22:49.226 "params": { 00:22:49.226 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.226 "listen_address": { 00:22:49.226 "trtype": "TCP", 00:22:49.226 "adrfam": "IPv4", 00:22:49.226 "traddr": "10.0.0.2", 00:22:49.226 "trsvcid": "4420" 00:22:49.226 }, 00:22:49.226 "secure_channel": true 00:22:49.226 } 00:22:49.226 } 00:22:49.226 ] 00:22:49.226 } 00:22:49.226 ] 00:22:49.226 }' 00:22:49.226 18:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:49.226 
18:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.226 18:52:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1428494 00:22:49.226 18:52:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:49.226 18:52:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1428494 00:22:49.226 18:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1428494 ']' 00:22:49.226 18:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.226 18:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:49.226 18:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.226 18:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:49.226 18:52:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.226 [2024-07-20 18:52:59.527928] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:22:49.226 [2024-07-20 18:52:59.528004] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.484 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.484 [2024-07-20 18:52:59.595641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.484 [2024-07-20 18:52:59.687185] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.484 [2024-07-20 18:52:59.687250] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.484 [2024-07-20 18:52:59.687277] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.484 [2024-07-20 18:52:59.687291] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.484 [2024-07-20 18:52:59.687303] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
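The '-c /dev/fd/62' in the nvmf_tgt invocation above is bash process substitution: tls.sh@203 echoes the JSON captured from the previous target run and hands the resulting descriptor straight to the new process. A minimal stand-alone sketch of the same replay pattern, assuming it is run from the SPDK source tree (paths, instance id and core mask mirror the ones in the trace):

  # Dump the running target's configuration as JSON, then feed it to a fresh
  # nvmf_tgt instance; <(...) shows up to the child as /dev/fd/NN.
  tgtconf=$(scripts/rpc.py save_config)
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &
  nvmfpid=$!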
00:22:49.484 [2024-07-20 18:52:59.687389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.741 [2024-07-20 18:52:59.914191] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.741 [2024-07-20 18:52:59.930133] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:49.741 [2024-07-20 18:52:59.946199] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:49.742 [2024-07-20 18:52:59.958010] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.304 18:53:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:50.304 18:53:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:50.304 18:53:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:50.304 18:53:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:50.304 18:53:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.304 18:53:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.304 18:53:00 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1428642 00:22:50.304 18:53:00 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1428642 /var/tmp/bdevperf.sock 00:22:50.304 18:53:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1428642 ']' 00:22:50.304 18:53:00 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:50.304 18:53:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.304 18:53:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:50.304 18:53:00 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:50.304 "subsystems": [ 00:22:50.304 { 00:22:50.304 "subsystem": "keyring", 00:22:50.304 "config": [] 00:22:50.304 }, 00:22:50.304 { 00:22:50.304 "subsystem": "iobuf", 00:22:50.304 "config": [ 00:22:50.304 { 00:22:50.304 "method": "iobuf_set_options", 00:22:50.304 "params": { 00:22:50.304 "small_pool_count": 8192, 00:22:50.304 "large_pool_count": 1024, 00:22:50.304 "small_bufsize": 8192, 00:22:50.304 "large_bufsize": 135168 00:22:50.304 } 00:22:50.304 } 00:22:50.304 ] 00:22:50.304 }, 00:22:50.304 { 00:22:50.304 "subsystem": "sock", 00:22:50.304 "config": [ 00:22:50.304 { 00:22:50.304 "method": "sock_set_default_impl", 00:22:50.304 "params": { 00:22:50.304 "impl_name": "posix" 00:22:50.304 } 00:22:50.304 }, 00:22:50.304 { 00:22:50.304 "method": "sock_impl_set_options", 00:22:50.304 "params": { 00:22:50.304 "impl_name": "ssl", 00:22:50.304 "recv_buf_size": 4096, 00:22:50.304 "send_buf_size": 4096, 00:22:50.304 "enable_recv_pipe": true, 00:22:50.304 "enable_quickack": false, 00:22:50.304 "enable_placement_id": 0, 00:22:50.304 "enable_zerocopy_send_server": true, 00:22:50.304 "enable_zerocopy_send_client": false, 00:22:50.304 "zerocopy_threshold": 0, 00:22:50.304 "tls_version": 0, 00:22:50.304 "enable_ktls": false 00:22:50.304 } 00:22:50.304 }, 00:22:50.304 { 00:22:50.304 "method": "sock_impl_set_options", 00:22:50.304 "params": { 00:22:50.304 "impl_name": "posix", 00:22:50.304 "recv_buf_size": 2097152, 00:22:50.304 "send_buf_size": 2097152, 00:22:50.304 "enable_recv_pipe": true, 00:22:50.304 
"enable_quickack": false, 00:22:50.304 "enable_placement_id": 0, 00:22:50.304 "enable_zerocopy_send_server": true, 00:22:50.304 "enable_zerocopy_send_client": false, 00:22:50.304 "zerocopy_threshold": 0, 00:22:50.304 "tls_version": 0, 00:22:50.304 "enable_ktls": false 00:22:50.304 } 00:22:50.304 } 00:22:50.304 ] 00:22:50.304 }, 00:22:50.304 { 00:22:50.304 "subsystem": "vmd", 00:22:50.304 "config": [] 00:22:50.304 }, 00:22:50.304 { 00:22:50.304 "subsystem": "accel", 00:22:50.304 "config": [ 00:22:50.304 { 00:22:50.304 "method": "accel_set_options", 00:22:50.304 "params": { 00:22:50.304 "small_cache_size": 128, 00:22:50.304 "large_cache_size": 16, 00:22:50.304 "task_count": 2048, 00:22:50.304 "sequence_count": 2048, 00:22:50.304 "buf_count": 2048 00:22:50.304 } 00:22:50.304 } 00:22:50.304 ] 00:22:50.304 }, 00:22:50.304 { 00:22:50.304 "subsystem": "bdev", 00:22:50.304 "config": [ 00:22:50.304 { 00:22:50.304 "method": "bdev_set_options", 00:22:50.304 "params": { 00:22:50.304 "bdev_io_pool_size": 65535, 00:22:50.304 "bdev_io_cache_size": 256, 00:22:50.304 "bdev_auto_examine": true, 00:22:50.304 "iobuf_small_cache_size": 128, 00:22:50.304 "iobuf_large_cache_size": 16 00:22:50.304 } 00:22:50.304 }, 00:22:50.304 { 00:22:50.304 "method": "bdev_raid_set_options", 00:22:50.304 "params": { 00:22:50.304 "process_window_size_kb": 1024 00:22:50.304 } 00:22:50.304 }, 00:22:50.304 { 00:22:50.304 "method": "bdev_iscsi_set_options", 00:22:50.304 "params": { 00:22:50.304 "timeout_sec": 30 00:22:50.304 } 00:22:50.304 }, 00:22:50.304 { 00:22:50.304 "method": "bdev_nvme_set_options", 00:22:50.304 "params": { 00:22:50.304 "action_on_timeout": "none", 00:22:50.304 "timeout_us": 0, 00:22:50.304 "timeout_admin_us": 0, 00:22:50.304 "keep_alive_timeout_ms": 10000, 00:22:50.304 "arbitration_burst": 0, 00:22:50.304 "low_priority_weight": 0, 00:22:50.304 "medium_priority_weight": 0, 00:22:50.304 "high_priority_weight": 0, 00:22:50.304 "nvme_adminq_poll_period_us": 10000, 00:22:50.304 "nvme_ioq_poll_period_us": 0, 00:22:50.304 "io_queue_requests": 512, 00:22:50.304 "delay_cmd_submit": true, 00:22:50.304 "transport_retry_count": 4, 00:22:50.304 "bdev_retry_count": 3, 00:22:50.304 "transport_ack_timeout": 0, 00:22:50.304 "ctrlr_loss_timeout_sec": 0, 00:22:50.304 "reconnect_delay_sec": 0, 00:22:50.304 "fast_io_fail_timeout_sec": 0, 00:22:50.304 "disable_auto_failback": false, 00:22:50.304 "generate_uuids": false, 00:22:50.304 "transport_tos": 0, 00:22:50.304 "nvme_error_stat": false, 00:22:50.304 "rdma_srq_size": 0, 00:22:50.304 "io_path_stat": false, 00:22:50.304 "allow_accel_sequence": false, 00:22:50.304 "rdma_max_cq_size": 0, 00:22:50.304 "rdma_cm_event_timeout_ms": 0, 00:22:50.304 "dhchap_digests": [ 00:22:50.304 "sha256", 00:22:50.304 "sha384", 00:22:50.304 "sha512" 00:22:50.304 ], 00:22:50.304 "dhchap_dhgroups": [ 00:22:50.304 "null", 00:22:50.304 "ffdhe2048", 00:22:50.304 "ffdhe3072", 00:22:50.304 "ffdhe4096", 00:22:50.304 "ffdhe6144", 00:22:50.304 "ffdhe8192" 00:22:50.304 ] 00:22:50.304 } 00:22:50.304 }, 00:22:50.304 { 00:22:50.304 "method": "bdev_nvme_attach_controller", 00:22:50.304 "params": { 00:22:50.304 "name": "TLSTEST", 00:22:50.304 "trtype": "TCP", 00:22:50.304 "adrfam": "IPv4", 00:22:50.304 "traddr": "10.0.0.2", 00:22:50.304 "trsvcid": "4420", 00:22:50.304 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.304 "prchk_reftag": false, 00:22:50.304 "prchk_guard": false, 00:22:50.304 "ctrlr_loss_timeout_sec": 0, 00:22:50.304 "reconnect_delay_sec": 0, 00:22:50.304 "fast_io_fail_timeout_sec": 0, 00:22:50.304 
"psk": "/tmp/tmp.kSRLDP6KZB", 00:22:50.304 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:50.304 "hdgst": false, 00:22:50.304 "ddgst": false 00:22:50.304 } 00:22:50.304 }, 00:22:50.304 { 00:22:50.304 "method": "bdev_nvme_set_hotplug", 00:22:50.304 "params": { 00:22:50.304 "period_us": 100000, 00:22:50.304 "enable": false 00:22:50.304 } 00:22:50.304 }, 00:22:50.304 { 00:22:50.304 "method": "bdev_wait_for_examine" 00:22:50.304 } 00:22:50.304 ] 00:22:50.304 }, 00:22:50.304 { 00:22:50.304 "subsystem": "nbd", 00:22:50.304 "config": [] 00:22:50.304 } 00:22:50.304 ] 00:22:50.304 }' 00:22:50.304 18:53:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:50.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:50.304 18:53:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:50.304 18:53:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.304 [2024-07-20 18:53:00.581787] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:22:50.304 [2024-07-20 18:53:00.581887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1428642 ] 00:22:50.304 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.562 [2024-07-20 18:53:00.641958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.562 [2024-07-20 18:53:00.728414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.819 [2024-07-20 18:53:00.894991] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:50.819 [2024-07-20 18:53:00.895099] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:51.384 18:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:51.384 18:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:51.384 18:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:51.384 Running I/O for 10 seconds... 
00:23:03.572 00:23:03.572 Latency(us) 00:23:03.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.572 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:03.572 Verification LBA range: start 0x0 length 0x2000 00:23:03.572 TLSTESTn1 : 10.12 735.02 2.87 0.00 0.00 173385.90 9611.95 271853.04 00:23:03.572 =================================================================================================================== 00:23:03.572 Total : 735.02 2.87 0.00 0.00 173385.90 9611.95 271853.04 00:23:03.572 0 00:23:03.572 18:53:11 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:03.572 18:53:11 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1428642 00:23:03.572 18:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1428642 ']' 00:23:03.572 18:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1428642 00:23:03.572 18:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:03.572 18:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:03.572 18:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1428642 00:23:03.572 18:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:03.572 18:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:03.572 18:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1428642' 00:23:03.572 killing process with pid 1428642 00:23:03.572 18:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1428642 00:23:03.572 Received shutdown signal, test time was about 10.000000 seconds 00:23:03.572 00:23:03.572 Latency(us) 00:23:03.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.572 =================================================================================================================== 00:23:03.572 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:03.572 [2024-07-20 18:53:11.814065] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:03.572 18:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1428642 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1428494 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1428494 ']' 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1428494 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1428494 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1428494' 00:23:03.572 killing process with pid 1428494 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1428494 00:23:03.572 [2024-07-20 18:53:12.059510] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal 
in v24.09 hit 1 times 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1428494 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1429964 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1429964 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1429964 ']' 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.572 [2024-07-20 18:53:12.368898] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:03.572 [2024-07-20 18:53:12.368977] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.572 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.572 [2024-07-20 18:53:12.435804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.572 [2024-07-20 18:53:12.528652] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.572 [2024-07-20 18:53:12.528709] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.572 [2024-07-20 18:53:12.528736] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.572 [2024-07-20 18:53:12.528750] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:03.572 [2024-07-20 18:53:12.528762] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
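What follows is the second half of the test: the fresh target started above (pid 1429964) is configured for TLS in place by the setup_nvmf_tgt helper at tls.sh@219. Stripped of xtrace noise, the RPC sequence visible in the next few lines amounts to roughly this sketch (the PSK file is the temporary key generated earlier in the test; -k on the listener is what requests TLS):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kSRLDP6KZB

The last call still uses the file-path form of --psk, which is why the deprecated 'PSK path' warning shows up again in the trace below.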
00:23:03.572 [2024-07-20 18:53:12.528818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.kSRLDP6KZB 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kSRLDP6KZB 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:03.572 [2024-07-20 18:53:12.942573] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.572 18:53:12 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:03.572 18:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:03.572 [2024-07-20 18:53:13.528143] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:03.572 [2024-07-20 18:53:13.528405] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.572 18:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:03.572 malloc0 00:23:03.572 18:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:03.830 18:53:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kSRLDP6KZB 00:23:04.087 [2024-07-20 18:53:14.282368] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:04.087 18:53:14 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1430253 00:23:04.087 18:53:14 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:04.087 18:53:14 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:04.087 18:53:14 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1430253 /var/tmp/bdevperf.sock 00:23:04.087 18:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1430253 ']' 00:23:04.087 18:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:04.087 18:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:04.087 18:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:04.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:04.087 18:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:04.087 18:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.087 [2024-07-20 18:53:14.339907] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:04.087 [2024-07-20 18:53:14.339985] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1430253 ] 00:23:04.087 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.087 [2024-07-20 18:53:14.401405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.345 [2024-07-20 18:53:14.492318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.345 18:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:04.345 18:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:04.345 18:53:14 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kSRLDP6KZB 00:23:04.602 18:53:14 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:04.858 [2024-07-20 18:53:15.064484] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:04.858 nvme0n1 00:23:04.858 18:53:15 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:05.115 Running I/O for 1 seconds... 
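On the initiator side the key handling above is the newer, keyring-based flow: tls.sh@227 registers the PSK file as a named key inside bdevperf, and tls.sh@228 attaches the controller by referencing that name rather than the raw path. The two calls, condensed from the trace (same RPC socket, addresses and key file):

  rpc=scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kSRLDP6KZB
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1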
00:23:06.485 00:23:06.485 Latency(us) 00:23:06.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.485 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:06.485 Verification LBA range: start 0x0 length 0x2000 00:23:06.485 nvme0n1 : 1.13 646.83 2.53 0.00 0.00 189809.35 7136.14 220589.32 00:23:06.485 =================================================================================================================== 00:23:06.485 Total : 646.83 2.53 0.00 0.00 189809.35 7136.14 220589.32 00:23:06.485 0 00:23:06.485 18:53:16 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1430253 00:23:06.485 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1430253 ']' 00:23:06.485 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1430253 00:23:06.485 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:06.485 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:06.485 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1430253 00:23:06.485 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:06.485 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:06.485 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1430253' 00:23:06.485 killing process with pid 1430253 00:23:06.485 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1430253 00:23:06.485 Received shutdown signal, test time was about 1.000000 seconds 00:23:06.485 00:23:06.485 Latency(us) 00:23:06.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.485 =================================================================================================================== 00:23:06.485 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:06.485 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1430253 00:23:06.485 18:53:16 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1429964 00:23:06.485 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1429964 ']' 00:23:06.485 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1429964 00:23:06.485 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:06.485 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:06.485 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1429964 00:23:06.485 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:06.485 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:06.485 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1429964' 00:23:06.485 killing process with pid 1429964 00:23:06.485 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1429964 00:23:06.485 [2024-07-20 18:53:16.681331] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:06.485 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1429964 00:23:06.743 18:53:16 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:06.743 18:53:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:06.743 
18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:06.743 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.743 18:53:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1430533 00:23:06.743 18:53:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:06.743 18:53:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1430533 00:23:06.743 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1430533 ']' 00:23:06.743 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.743 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:06.743 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.743 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:06.743 18:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.743 [2024-07-20 18:53:16.964713] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:06.743 [2024-07-20 18:53:16.964820] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.743 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.743 [2024-07-20 18:53:17.031134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.002 [2024-07-20 18:53:17.117627] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.002 [2024-07-20 18:53:17.117686] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.002 [2024-07-20 18:53:17.117701] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.002 [2024-07-20 18:53:17.117713] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.002 [2024-07-20 18:53:17.117725] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
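For reference, the bdevperf numbers in these short runs are internally consistent: MiB/s is just IOPS multiplied by the 4 KiB IO size. A one-liner to sanity-check the 1-second nvme0n1 run reported above (646.83 IOPS):

  awk 'BEGIN { printf "%.2f MiB/s\n", 646.83 * 4096 / (1024 * 1024) }'   # prints 2.53 MiB/s, matching the table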
00:23:07.002 [2024-07-20 18:53:17.117751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.002 18:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:07.002 18:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:07.002 18:53:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:07.002 18:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.002 18:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.002 18:53:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.002 18:53:17 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:07.002 18:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.002 18:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.002 [2024-07-20 18:53:17.260984] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.002 malloc0 00:23:07.002 [2024-07-20 18:53:17.293678] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:07.002 [2024-07-20 18:53:17.293997] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.002 18:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.002 18:53:17 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1430560 00:23:07.002 18:53:17 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:07.002 18:53:17 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1430560 /var/tmp/bdevperf.sock 00:23:07.002 18:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1430560 ']' 00:23:07.002 18:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.002 18:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:07.002 18:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.002 18:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:07.002 18:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.260 [2024-07-20 18:53:17.363931] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
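Two saved configurations close this part out: tls.sh@263 captures the target side (tgtcfg) and tls.sh@264 the bdevperf side (bperfcfg). The notable difference from the earlier dumps is in tgtcfg below, where the keyring subsystem now carries key0 and the nvmf_subsystem_add_host entry references it by name ("psk": "key0") instead of the raw /tmp path. A quick, purely illustrative way to pull that field out of any saved config (jq is not part of the test itself):

  scripts/rpc.py save_config \
    | jq '.subsystems[] | select(.subsystem == "nvmf").config[]
          | select(.method == "nvmf_subsystem_add_host").params.psk'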
00:23:07.260 [2024-07-20 18:53:17.363996] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1430560 ] 00:23:07.260 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.260 [2024-07-20 18:53:17.426086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.260 [2024-07-20 18:53:17.516933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.518 18:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:07.518 18:53:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:07.518 18:53:17 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kSRLDP6KZB 00:23:07.776 18:53:17 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:08.033 [2024-07-20 18:53:18.195187] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.033 nvme0n1 00:23:08.033 18:53:18 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:08.290 Running I/O for 1 seconds... 00:23:09.275 00:23:09.275 Latency(us) 00:23:09.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.275 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:09.275 Verification LBA range: start 0x0 length 0x2000 00:23:09.275 nvme0n1 : 1.14 616.18 2.41 0.00 0.00 199006.22 10485.76 214375.54 00:23:09.275 =================================================================================================================== 00:23:09.275 Total : 616.18 2.41 0.00 0.00 199006.22 10485.76 214375.54 00:23:09.275 0 00:23:09.275 18:53:19 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:09.275 18:53:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.275 18:53:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.533 18:53:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.533 18:53:19 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:09.533 "subsystems": [ 00:23:09.533 { 00:23:09.533 "subsystem": "keyring", 00:23:09.533 "config": [ 00:23:09.533 { 00:23:09.533 "method": "keyring_file_add_key", 00:23:09.533 "params": { 00:23:09.533 "name": "key0", 00:23:09.533 "path": "/tmp/tmp.kSRLDP6KZB" 00:23:09.533 } 00:23:09.533 } 00:23:09.533 ] 00:23:09.533 }, 00:23:09.533 { 00:23:09.533 "subsystem": "iobuf", 00:23:09.533 "config": [ 00:23:09.533 { 00:23:09.533 "method": "iobuf_set_options", 00:23:09.533 "params": { 00:23:09.533 "small_pool_count": 8192, 00:23:09.533 "large_pool_count": 1024, 00:23:09.533 "small_bufsize": 8192, 00:23:09.533 "large_bufsize": 135168 00:23:09.533 } 00:23:09.533 } 00:23:09.533 ] 00:23:09.533 }, 00:23:09.533 { 00:23:09.533 "subsystem": "sock", 00:23:09.533 "config": [ 00:23:09.533 { 00:23:09.533 "method": "sock_set_default_impl", 00:23:09.533 "params": { 00:23:09.533 "impl_name": "posix" 00:23:09.533 } 00:23:09.533 }, 
00:23:09.533 { 00:23:09.533 "method": "sock_impl_set_options", 00:23:09.533 "params": { 00:23:09.533 "impl_name": "ssl", 00:23:09.533 "recv_buf_size": 4096, 00:23:09.533 "send_buf_size": 4096, 00:23:09.533 "enable_recv_pipe": true, 00:23:09.533 "enable_quickack": false, 00:23:09.533 "enable_placement_id": 0, 00:23:09.533 "enable_zerocopy_send_server": true, 00:23:09.533 "enable_zerocopy_send_client": false, 00:23:09.533 "zerocopy_threshold": 0, 00:23:09.533 "tls_version": 0, 00:23:09.533 "enable_ktls": false 00:23:09.533 } 00:23:09.533 }, 00:23:09.533 { 00:23:09.533 "method": "sock_impl_set_options", 00:23:09.533 "params": { 00:23:09.533 "impl_name": "posix", 00:23:09.533 "recv_buf_size": 2097152, 00:23:09.533 "send_buf_size": 2097152, 00:23:09.533 "enable_recv_pipe": true, 00:23:09.533 "enable_quickack": false, 00:23:09.533 "enable_placement_id": 0, 00:23:09.533 "enable_zerocopy_send_server": true, 00:23:09.533 "enable_zerocopy_send_client": false, 00:23:09.533 "zerocopy_threshold": 0, 00:23:09.533 "tls_version": 0, 00:23:09.533 "enable_ktls": false 00:23:09.533 } 00:23:09.533 } 00:23:09.533 ] 00:23:09.533 }, 00:23:09.533 { 00:23:09.533 "subsystem": "vmd", 00:23:09.533 "config": [] 00:23:09.533 }, 00:23:09.533 { 00:23:09.533 "subsystem": "accel", 00:23:09.533 "config": [ 00:23:09.533 { 00:23:09.533 "method": "accel_set_options", 00:23:09.533 "params": { 00:23:09.533 "small_cache_size": 128, 00:23:09.533 "large_cache_size": 16, 00:23:09.533 "task_count": 2048, 00:23:09.533 "sequence_count": 2048, 00:23:09.533 "buf_count": 2048 00:23:09.533 } 00:23:09.533 } 00:23:09.533 ] 00:23:09.533 }, 00:23:09.533 { 00:23:09.533 "subsystem": "bdev", 00:23:09.533 "config": [ 00:23:09.533 { 00:23:09.533 "method": "bdev_set_options", 00:23:09.533 "params": { 00:23:09.533 "bdev_io_pool_size": 65535, 00:23:09.533 "bdev_io_cache_size": 256, 00:23:09.533 "bdev_auto_examine": true, 00:23:09.533 "iobuf_small_cache_size": 128, 00:23:09.533 "iobuf_large_cache_size": 16 00:23:09.533 } 00:23:09.533 }, 00:23:09.533 { 00:23:09.533 "method": "bdev_raid_set_options", 00:23:09.533 "params": { 00:23:09.533 "process_window_size_kb": 1024 00:23:09.533 } 00:23:09.533 }, 00:23:09.533 { 00:23:09.533 "method": "bdev_iscsi_set_options", 00:23:09.533 "params": { 00:23:09.533 "timeout_sec": 30 00:23:09.533 } 00:23:09.533 }, 00:23:09.533 { 00:23:09.533 "method": "bdev_nvme_set_options", 00:23:09.533 "params": { 00:23:09.533 "action_on_timeout": "none", 00:23:09.533 "timeout_us": 0, 00:23:09.533 "timeout_admin_us": 0, 00:23:09.533 "keep_alive_timeout_ms": 10000, 00:23:09.533 "arbitration_burst": 0, 00:23:09.533 "low_priority_weight": 0, 00:23:09.533 "medium_priority_weight": 0, 00:23:09.533 "high_priority_weight": 0, 00:23:09.533 "nvme_adminq_poll_period_us": 10000, 00:23:09.533 "nvme_ioq_poll_period_us": 0, 00:23:09.533 "io_queue_requests": 0, 00:23:09.533 "delay_cmd_submit": true, 00:23:09.533 "transport_retry_count": 4, 00:23:09.533 "bdev_retry_count": 3, 00:23:09.533 "transport_ack_timeout": 0, 00:23:09.533 "ctrlr_loss_timeout_sec": 0, 00:23:09.533 "reconnect_delay_sec": 0, 00:23:09.533 "fast_io_fail_timeout_sec": 0, 00:23:09.533 "disable_auto_failback": false, 00:23:09.533 "generate_uuids": false, 00:23:09.533 "transport_tos": 0, 00:23:09.533 "nvme_error_stat": false, 00:23:09.533 "rdma_srq_size": 0, 00:23:09.533 "io_path_stat": false, 00:23:09.534 "allow_accel_sequence": false, 00:23:09.534 "rdma_max_cq_size": 0, 00:23:09.534 "rdma_cm_event_timeout_ms": 0, 00:23:09.534 "dhchap_digests": [ 00:23:09.534 "sha256", 00:23:09.534 
"sha384", 00:23:09.534 "sha512" 00:23:09.534 ], 00:23:09.534 "dhchap_dhgroups": [ 00:23:09.534 "null", 00:23:09.534 "ffdhe2048", 00:23:09.534 "ffdhe3072", 00:23:09.534 "ffdhe4096", 00:23:09.534 "ffdhe6144", 00:23:09.534 "ffdhe8192" 00:23:09.534 ] 00:23:09.534 } 00:23:09.534 }, 00:23:09.534 { 00:23:09.534 "method": "bdev_nvme_set_hotplug", 00:23:09.534 "params": { 00:23:09.534 "period_us": 100000, 00:23:09.534 "enable": false 00:23:09.534 } 00:23:09.534 }, 00:23:09.534 { 00:23:09.534 "method": "bdev_malloc_create", 00:23:09.534 "params": { 00:23:09.534 "name": "malloc0", 00:23:09.534 "num_blocks": 8192, 00:23:09.534 "block_size": 4096, 00:23:09.534 "physical_block_size": 4096, 00:23:09.534 "uuid": "04173cb8-bfe2-4f6f-9ada-e77b3a319b8b", 00:23:09.534 "optimal_io_boundary": 0 00:23:09.534 } 00:23:09.534 }, 00:23:09.534 { 00:23:09.534 "method": "bdev_wait_for_examine" 00:23:09.534 } 00:23:09.534 ] 00:23:09.534 }, 00:23:09.534 { 00:23:09.534 "subsystem": "nbd", 00:23:09.534 "config": [] 00:23:09.534 }, 00:23:09.534 { 00:23:09.534 "subsystem": "scheduler", 00:23:09.534 "config": [ 00:23:09.534 { 00:23:09.534 "method": "framework_set_scheduler", 00:23:09.534 "params": { 00:23:09.534 "name": "static" 00:23:09.534 } 00:23:09.534 } 00:23:09.534 ] 00:23:09.534 }, 00:23:09.534 { 00:23:09.534 "subsystem": "nvmf", 00:23:09.534 "config": [ 00:23:09.534 { 00:23:09.534 "method": "nvmf_set_config", 00:23:09.534 "params": { 00:23:09.534 "discovery_filter": "match_any", 00:23:09.534 "admin_cmd_passthru": { 00:23:09.534 "identify_ctrlr": false 00:23:09.534 } 00:23:09.534 } 00:23:09.534 }, 00:23:09.534 { 00:23:09.534 "method": "nvmf_set_max_subsystems", 00:23:09.534 "params": { 00:23:09.534 "max_subsystems": 1024 00:23:09.534 } 00:23:09.534 }, 00:23:09.534 { 00:23:09.534 "method": "nvmf_set_crdt", 00:23:09.534 "params": { 00:23:09.534 "crdt1": 0, 00:23:09.534 "crdt2": 0, 00:23:09.534 "crdt3": 0 00:23:09.534 } 00:23:09.534 }, 00:23:09.534 { 00:23:09.534 "method": "nvmf_create_transport", 00:23:09.534 "params": { 00:23:09.534 "trtype": "TCP", 00:23:09.534 "max_queue_depth": 128, 00:23:09.534 "max_io_qpairs_per_ctrlr": 127, 00:23:09.534 "in_capsule_data_size": 4096, 00:23:09.534 "max_io_size": 131072, 00:23:09.534 "io_unit_size": 131072, 00:23:09.534 "max_aq_depth": 128, 00:23:09.534 "num_shared_buffers": 511, 00:23:09.534 "buf_cache_size": 4294967295, 00:23:09.534 "dif_insert_or_strip": false, 00:23:09.534 "zcopy": false, 00:23:09.534 "c2h_success": false, 00:23:09.534 "sock_priority": 0, 00:23:09.534 "abort_timeout_sec": 1, 00:23:09.534 "ack_timeout": 0, 00:23:09.534 "data_wr_pool_size": 0 00:23:09.534 } 00:23:09.534 }, 00:23:09.534 { 00:23:09.534 "method": "nvmf_create_subsystem", 00:23:09.534 "params": { 00:23:09.534 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.534 "allow_any_host": false, 00:23:09.534 "serial_number": "00000000000000000000", 00:23:09.534 "model_number": "SPDK bdev Controller", 00:23:09.534 "max_namespaces": 32, 00:23:09.534 "min_cntlid": 1, 00:23:09.534 "max_cntlid": 65519, 00:23:09.534 "ana_reporting": false 00:23:09.534 } 00:23:09.534 }, 00:23:09.534 { 00:23:09.534 "method": "nvmf_subsystem_add_host", 00:23:09.534 "params": { 00:23:09.534 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.534 "host": "nqn.2016-06.io.spdk:host1", 00:23:09.534 "psk": "key0" 00:23:09.534 } 00:23:09.534 }, 00:23:09.534 { 00:23:09.534 "method": "nvmf_subsystem_add_ns", 00:23:09.534 "params": { 00:23:09.534 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.534 "namespace": { 00:23:09.534 "nsid": 1, 00:23:09.534 
"bdev_name": "malloc0", 00:23:09.534 "nguid": "04173CB8BFE24F6F9ADAE77B3A319B8B", 00:23:09.534 "uuid": "04173cb8-bfe2-4f6f-9ada-e77b3a319b8b", 00:23:09.534 "no_auto_visible": false 00:23:09.534 } 00:23:09.534 } 00:23:09.534 }, 00:23:09.534 { 00:23:09.534 "method": "nvmf_subsystem_add_listener", 00:23:09.534 "params": { 00:23:09.534 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.534 "listen_address": { 00:23:09.534 "trtype": "TCP", 00:23:09.534 "adrfam": "IPv4", 00:23:09.534 "traddr": "10.0.0.2", 00:23:09.534 "trsvcid": "4420" 00:23:09.534 }, 00:23:09.534 "secure_channel": true 00:23:09.534 } 00:23:09.534 } 00:23:09.534 ] 00:23:09.534 } 00:23:09.534 ] 00:23:09.534 }' 00:23:09.534 18:53:19 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:09.793 18:53:19 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:09.793 "subsystems": [ 00:23:09.793 { 00:23:09.793 "subsystem": "keyring", 00:23:09.793 "config": [ 00:23:09.793 { 00:23:09.793 "method": "keyring_file_add_key", 00:23:09.793 "params": { 00:23:09.793 "name": "key0", 00:23:09.793 "path": "/tmp/tmp.kSRLDP6KZB" 00:23:09.793 } 00:23:09.793 } 00:23:09.793 ] 00:23:09.793 }, 00:23:09.793 { 00:23:09.793 "subsystem": "iobuf", 00:23:09.793 "config": [ 00:23:09.793 { 00:23:09.793 "method": "iobuf_set_options", 00:23:09.793 "params": { 00:23:09.793 "small_pool_count": 8192, 00:23:09.793 "large_pool_count": 1024, 00:23:09.793 "small_bufsize": 8192, 00:23:09.793 "large_bufsize": 135168 00:23:09.793 } 00:23:09.793 } 00:23:09.793 ] 00:23:09.793 }, 00:23:09.793 { 00:23:09.793 "subsystem": "sock", 00:23:09.793 "config": [ 00:23:09.793 { 00:23:09.793 "method": "sock_set_default_impl", 00:23:09.793 "params": { 00:23:09.793 "impl_name": "posix" 00:23:09.793 } 00:23:09.793 }, 00:23:09.793 { 00:23:09.793 "method": "sock_impl_set_options", 00:23:09.793 "params": { 00:23:09.793 "impl_name": "ssl", 00:23:09.793 "recv_buf_size": 4096, 00:23:09.793 "send_buf_size": 4096, 00:23:09.793 "enable_recv_pipe": true, 00:23:09.793 "enable_quickack": false, 00:23:09.793 "enable_placement_id": 0, 00:23:09.793 "enable_zerocopy_send_server": true, 00:23:09.793 "enable_zerocopy_send_client": false, 00:23:09.793 "zerocopy_threshold": 0, 00:23:09.793 "tls_version": 0, 00:23:09.793 "enable_ktls": false 00:23:09.793 } 00:23:09.793 }, 00:23:09.793 { 00:23:09.793 "method": "sock_impl_set_options", 00:23:09.793 "params": { 00:23:09.793 "impl_name": "posix", 00:23:09.793 "recv_buf_size": 2097152, 00:23:09.793 "send_buf_size": 2097152, 00:23:09.793 "enable_recv_pipe": true, 00:23:09.793 "enable_quickack": false, 00:23:09.793 "enable_placement_id": 0, 00:23:09.793 "enable_zerocopy_send_server": true, 00:23:09.793 "enable_zerocopy_send_client": false, 00:23:09.793 "zerocopy_threshold": 0, 00:23:09.793 "tls_version": 0, 00:23:09.793 "enable_ktls": false 00:23:09.793 } 00:23:09.793 } 00:23:09.793 ] 00:23:09.793 }, 00:23:09.793 { 00:23:09.793 "subsystem": "vmd", 00:23:09.793 "config": [] 00:23:09.793 }, 00:23:09.793 { 00:23:09.793 "subsystem": "accel", 00:23:09.793 "config": [ 00:23:09.793 { 00:23:09.793 "method": "accel_set_options", 00:23:09.793 "params": { 00:23:09.793 "small_cache_size": 128, 00:23:09.793 "large_cache_size": 16, 00:23:09.793 "task_count": 2048, 00:23:09.793 "sequence_count": 2048, 00:23:09.793 "buf_count": 2048 00:23:09.793 } 00:23:09.793 } 00:23:09.793 ] 00:23:09.793 }, 00:23:09.793 { 00:23:09.793 "subsystem": "bdev", 00:23:09.793 "config": [ 00:23:09.793 { 
00:23:09.793 "method": "bdev_set_options", 00:23:09.793 "params": { 00:23:09.793 "bdev_io_pool_size": 65535, 00:23:09.793 "bdev_io_cache_size": 256, 00:23:09.793 "bdev_auto_examine": true, 00:23:09.793 "iobuf_small_cache_size": 128, 00:23:09.793 "iobuf_large_cache_size": 16 00:23:09.793 } 00:23:09.793 }, 00:23:09.793 { 00:23:09.793 "method": "bdev_raid_set_options", 00:23:09.793 "params": { 00:23:09.793 "process_window_size_kb": 1024 00:23:09.793 } 00:23:09.793 }, 00:23:09.793 { 00:23:09.793 "method": "bdev_iscsi_set_options", 00:23:09.793 "params": { 00:23:09.793 "timeout_sec": 30 00:23:09.793 } 00:23:09.793 }, 00:23:09.793 { 00:23:09.793 "method": "bdev_nvme_set_options", 00:23:09.793 "params": { 00:23:09.793 "action_on_timeout": "none", 00:23:09.793 "timeout_us": 0, 00:23:09.793 "timeout_admin_us": 0, 00:23:09.793 "keep_alive_timeout_ms": 10000, 00:23:09.793 "arbitration_burst": 0, 00:23:09.793 "low_priority_weight": 0, 00:23:09.793 "medium_priority_weight": 0, 00:23:09.793 "high_priority_weight": 0, 00:23:09.793 "nvme_adminq_poll_period_us": 10000, 00:23:09.793 "nvme_ioq_poll_period_us": 0, 00:23:09.793 "io_queue_requests": 512, 00:23:09.793 "delay_cmd_submit": true, 00:23:09.793 "transport_retry_count": 4, 00:23:09.793 "bdev_retry_count": 3, 00:23:09.793 "transport_ack_timeout": 0, 00:23:09.793 "ctrlr_loss_timeout_sec": 0, 00:23:09.793 "reconnect_delay_sec": 0, 00:23:09.793 "fast_io_fail_timeout_sec": 0, 00:23:09.793 "disable_auto_failback": false, 00:23:09.793 "generate_uuids": false, 00:23:09.793 "transport_tos": 0, 00:23:09.793 "nvme_error_stat": false, 00:23:09.793 "rdma_srq_size": 0, 00:23:09.793 "io_path_stat": false, 00:23:09.793 "allow_accel_sequence": false, 00:23:09.793 "rdma_max_cq_size": 0, 00:23:09.793 "rdma_cm_event_timeout_ms": 0, 00:23:09.793 "dhchap_digests": [ 00:23:09.793 "sha256", 00:23:09.793 "sha384", 00:23:09.793 "sha512" 00:23:09.793 ], 00:23:09.793 "dhchap_dhgroups": [ 00:23:09.793 "null", 00:23:09.793 "ffdhe2048", 00:23:09.793 "ffdhe3072", 00:23:09.793 "ffdhe4096", 00:23:09.793 "ffdhe6144", 00:23:09.793 "ffdhe8192" 00:23:09.793 ] 00:23:09.793 } 00:23:09.793 }, 00:23:09.793 { 00:23:09.793 "method": "bdev_nvme_attach_controller", 00:23:09.793 "params": { 00:23:09.793 "name": "nvme0", 00:23:09.793 "trtype": "TCP", 00:23:09.793 "adrfam": "IPv4", 00:23:09.793 "traddr": "10.0.0.2", 00:23:09.793 "trsvcid": "4420", 00:23:09.793 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.793 "prchk_reftag": false, 00:23:09.793 "prchk_guard": false, 00:23:09.793 "ctrlr_loss_timeout_sec": 0, 00:23:09.793 "reconnect_delay_sec": 0, 00:23:09.793 "fast_io_fail_timeout_sec": 0, 00:23:09.793 "psk": "key0", 00:23:09.793 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.793 "hdgst": false, 00:23:09.793 "ddgst": false 00:23:09.793 } 00:23:09.793 }, 00:23:09.793 { 00:23:09.793 "method": "bdev_nvme_set_hotplug", 00:23:09.793 "params": { 00:23:09.793 "period_us": 100000, 00:23:09.793 "enable": false 00:23:09.793 } 00:23:09.793 }, 00:23:09.793 { 00:23:09.793 "method": "bdev_enable_histogram", 00:23:09.793 "params": { 00:23:09.793 "name": "nvme0n1", 00:23:09.793 "enable": true 00:23:09.793 } 00:23:09.793 }, 00:23:09.793 { 00:23:09.793 "method": "bdev_wait_for_examine" 00:23:09.793 } 00:23:09.793 ] 00:23:09.793 }, 00:23:09.793 { 00:23:09.793 "subsystem": "nbd", 00:23:09.793 "config": [] 00:23:09.793 } 00:23:09.794 ] 00:23:09.794 }' 00:23:09.794 18:53:19 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1430560 00:23:09.794 18:53:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' 
-z 1430560 ']' 00:23:09.794 18:53:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1430560 00:23:09.794 18:53:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:09.794 18:53:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:09.794 18:53:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1430560 00:23:09.794 18:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:09.794 18:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:09.794 18:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1430560' 00:23:09.794 killing process with pid 1430560 00:23:09.794 18:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1430560 00:23:09.794 Received shutdown signal, test time was about 1.000000 seconds 00:23:09.794 00:23:09.794 Latency(us) 00:23:09.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.794 =================================================================================================================== 00:23:09.794 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:09.794 18:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1430560 00:23:10.052 18:53:20 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1430533 00:23:10.052 18:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1430533 ']' 00:23:10.052 18:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1430533 00:23:10.052 18:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:10.052 18:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:10.052 18:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1430533 00:23:10.052 18:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:10.052 18:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:10.052 18:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1430533' 00:23:10.052 killing process with pid 1430533 00:23:10.052 18:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1430533 00:23:10.052 18:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1430533 00:23:10.312 18:53:20 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:10.312 18:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:10.312 18:53:20 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:10.312 "subsystems": [ 00:23:10.312 { 00:23:10.312 "subsystem": "keyring", 00:23:10.312 "config": [ 00:23:10.312 { 00:23:10.312 "method": "keyring_file_add_key", 00:23:10.312 "params": { 00:23:10.312 "name": "key0", 00:23:10.312 "path": "/tmp/tmp.kSRLDP6KZB" 00:23:10.312 } 00:23:10.312 } 00:23:10.312 ] 00:23:10.312 }, 00:23:10.312 { 00:23:10.312 "subsystem": "iobuf", 00:23:10.312 "config": [ 00:23:10.312 { 00:23:10.312 "method": "iobuf_set_options", 00:23:10.312 "params": { 00:23:10.312 "small_pool_count": 8192, 00:23:10.312 "large_pool_count": 1024, 00:23:10.312 "small_bufsize": 8192, 00:23:10.312 "large_bufsize": 135168 00:23:10.312 } 00:23:10.312 } 00:23:10.312 ] 00:23:10.312 }, 00:23:10.312 { 00:23:10.312 "subsystem": "sock", 00:23:10.312 "config": [ 00:23:10.312 { 00:23:10.312 "method": "sock_set_default_impl", 
00:23:10.312 "params": { 00:23:10.312 "impl_name": "posix" 00:23:10.312 } 00:23:10.312 }, 00:23:10.312 { 00:23:10.312 "method": "sock_impl_set_options", 00:23:10.312 "params": { 00:23:10.312 "impl_name": "ssl", 00:23:10.312 "recv_buf_size": 4096, 00:23:10.312 "send_buf_size": 4096, 00:23:10.312 "enable_recv_pipe": true, 00:23:10.312 "enable_quickack": false, 00:23:10.312 "enable_placement_id": 0, 00:23:10.312 "enable_zerocopy_send_server": true, 00:23:10.312 "enable_zerocopy_send_client": false, 00:23:10.312 "zerocopy_threshold": 0, 00:23:10.312 "tls_version": 0, 00:23:10.312 "enable_ktls": false 00:23:10.312 } 00:23:10.312 }, 00:23:10.312 { 00:23:10.312 "method": "sock_impl_set_options", 00:23:10.312 "params": { 00:23:10.312 "impl_name": "posix", 00:23:10.312 "recv_buf_size": 2097152, 00:23:10.312 "send_buf_size": 2097152, 00:23:10.312 "enable_recv_pipe": true, 00:23:10.312 "enable_quickack": false, 00:23:10.312 "enable_placement_id": 0, 00:23:10.312 "enable_zerocopy_send_server": true, 00:23:10.312 "enable_zerocopy_send_client": false, 00:23:10.312 "zerocopy_threshold": 0, 00:23:10.312 "tls_version": 0, 00:23:10.312 "enable_ktls": false 00:23:10.312 } 00:23:10.312 } 00:23:10.312 ] 00:23:10.312 }, 00:23:10.312 { 00:23:10.312 "subsystem": "vmd", 00:23:10.312 "config": [] 00:23:10.312 }, 00:23:10.312 { 00:23:10.312 "subsystem": "accel", 00:23:10.312 "config": [ 00:23:10.312 { 00:23:10.312 "method": "accel_set_options", 00:23:10.312 "params": { 00:23:10.312 "small_cache_size": 128, 00:23:10.312 "large_cache_size": 16, 00:23:10.312 "task_count": 2048, 00:23:10.312 "sequence_count": 2048, 00:23:10.312 "buf_count": 2048 00:23:10.312 } 00:23:10.312 } 00:23:10.312 ] 00:23:10.312 }, 00:23:10.312 { 00:23:10.312 "subsystem": "bdev", 00:23:10.312 "config": [ 00:23:10.312 { 00:23:10.312 "method": "bdev_set_options", 00:23:10.312 "params": { 00:23:10.312 "bdev_io_pool_size": 65535, 00:23:10.312 "bdev_io_cache_size": 256, 00:23:10.312 "bdev_auto_examine": true, 00:23:10.312 "iobuf_small_cache_size": 128, 00:23:10.312 "iobuf_large_cache_size": 16 00:23:10.312 } 00:23:10.312 }, 00:23:10.312 { 00:23:10.312 "method": "bdev_raid_set_options", 00:23:10.312 "params": { 00:23:10.312 "process_window_size_kb": 1024 00:23:10.312 } 00:23:10.312 }, 00:23:10.312 { 00:23:10.312 "method": "bdev_iscsi_set_options", 00:23:10.312 "params": { 00:23:10.312 "timeout_sec": 30 00:23:10.312 } 00:23:10.312 }, 00:23:10.312 { 00:23:10.312 "method": "bdev_nvme_set_options", 00:23:10.312 "params": { 00:23:10.312 "action_on_timeout": "none", 00:23:10.312 "timeout_us": 0, 00:23:10.312 "timeout_admin_us": 0, 00:23:10.312 "keep_alive_timeout_ms": 10000, 00:23:10.312 "arbitration_burst": 0, 00:23:10.312 "low_priority_weight": 0, 00:23:10.312 "medium_priority_weight": 0, 00:23:10.312 "high_priority_weight": 0, 00:23:10.312 "nvme_adminq_poll_period_us": 10000, 00:23:10.312 "nvme_ioq_poll_period_us": 0, 00:23:10.312 "io_queue_requests": 0, 00:23:10.312 "delay_cmd_submit": true, 00:23:10.312 "transport_retry_count": 4, 00:23:10.312 "bdev_retry_count": 3, 00:23:10.312 "transport_ack_timeout": 0, 00:23:10.312 "ctrlr_loss_timeout_sec": 0, 00:23:10.312 "reconnect_delay_sec": 0, 00:23:10.312 "fast_io_fail_timeout_sec": 0, 00:23:10.312 "disable_auto_failback": false, 00:23:10.312 "generate_uuids": false, 00:23:10.312 "transport_tos": 0, 00:23:10.312 "nvme_error_stat": false, 00:23:10.312 "rdma_srq_size": 0, 00:23:10.312 "io_path_stat": false, 00:23:10.312 "allow_accel_sequence": false, 00:23:10.312 "rdma_max_cq_size": 0, 00:23:10.312 
"rdma_cm_event_timeout_ms": 0, 00:23:10.312 "dhchap_digests": [ 00:23:10.312 "sha256", 00:23:10.312 "sha384", 00:23:10.312 "sha512" 00:23:10.312 ], 00:23:10.312 "dhchap_dhgroups": [ 00:23:10.312 "null", 00:23:10.312 "ffdhe2048", 00:23:10.312 "ffdhe3072", 00:23:10.312 "ffdhe4096", 00:23:10.312 "ffdhe6144", 00:23:10.312 "ffdhe8192" 00:23:10.312 ] 00:23:10.312 } 00:23:10.312 }, 00:23:10.312 { 00:23:10.312 "method": "bdev_nvme_set_hotplug", 00:23:10.312 "params": { 00:23:10.312 "period_us": 100000, 00:23:10.312 "enable": false 00:23:10.312 } 00:23:10.312 }, 00:23:10.312 { 00:23:10.312 "method": "bdev_malloc_create", 00:23:10.312 "params": { 00:23:10.312 "name": "malloc0", 00:23:10.312 "num_blocks": 8192, 00:23:10.312 "block_size": 4096, 00:23:10.312 "physical_block_size": 4096, 00:23:10.312 "uuid": "04173cb8-bfe2-4f6f-9ada-e77b3a319b8b", 00:23:10.312 "optimal_io_boundary": 0 00:23:10.312 } 00:23:10.312 }, 00:23:10.312 { 00:23:10.312 "method": "bdev_wait_for_examine" 00:23:10.312 } 00:23:10.312 ] 00:23:10.312 }, 00:23:10.312 { 00:23:10.312 "subsystem": "nbd", 00:23:10.312 "config": [] 00:23:10.312 }, 00:23:10.312 { 00:23:10.312 "subsystem": "scheduler", 00:23:10.312 "config": [ 00:23:10.312 { 00:23:10.312 "method": "framework_set_scheduler", 00:23:10.312 "params": { 00:23:10.312 "name": "static" 00:23:10.312 } 00:23:10.312 } 00:23:10.312 ] 00:23:10.312 }, 00:23:10.312 { 00:23:10.312 "subsystem": "nvmf", 00:23:10.312 "config": [ 00:23:10.312 { 00:23:10.312 "method": "nvmf_set_config", 00:23:10.312 "params": { 00:23:10.312 "discovery_filter": "match_any", 00:23:10.312 "admin_cmd_passthru": { 00:23:10.312 "identify_ctrlr": false 00:23:10.312 } 00:23:10.312 } 00:23:10.312 }, 00:23:10.312 { 00:23:10.312 "method": "nvmf_set_max_subsystems", 00:23:10.312 "params": { 00:23:10.312 "max_subsystems": 1024 00:23:10.313 } 00:23:10.313 }, 00:23:10.313 { 00:23:10.313 "method": "nvmf_set_crdt", 00:23:10.313 "params": { 00:23:10.313 "crdt1": 0, 00:23:10.313 "crdt2": 0, 00:23:10.313 "crdt3": 0 00:23:10.313 } 00:23:10.313 }, 00:23:10.313 { 00:23:10.313 "method": "nvmf_create_transport", 00:23:10.313 "params": { 00:23:10.313 "trtype": "TCP", 00:23:10.313 "max_queue_depth": 128, 00:23:10.313 "max_io_qpairs_per_ctrlr": 127, 00:23:10.313 "in_capsule_data_size": 4096, 00:23:10.313 "max_io_size": 131072, 00:23:10.313 "io_unit_size": 131072, 00:23:10.313 "max_aq_depth": 128, 00:23:10.313 "num_shared_buffers": 511, 00:23:10.313 "buf_cache_size": 4294967295, 00:23:10.313 "dif_insert_or_strip": false, 00:23:10.313 "zcopy": false, 00:23:10.313 "c2h_success": false, 00:23:10.313 "sock_priority": 0, 00:23:10.313 "abort_timeout_sec": 1, 00:23:10.313 "ack_timeout": 0, 00:23:10.313 "data_wr_pool_size": 0 00:23:10.313 } 00:23:10.313 }, 00:23:10.313 { 00:23:10.313 "method": "nvmf_create_subsystem", 00:23:10.313 "params": { 00:23:10.313 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.313 "allow_any_host": false, 00:23:10.313 "serial_number": "00000000000000000000", 00:23:10.313 "model_number": "SPDK bdev Controller", 00:23:10.313 "max_namespaces": 32, 00:23:10.313 "min_cntlid": 1, 00:23:10.313 "max_cntlid": 65519, 00:23:10.313 "ana_reporting": false 00:23:10.313 } 00:23:10.313 }, 00:23:10.313 { 00:23:10.313 "method": "nvmf_subsystem_add_host", 00:23:10.313 "params": { 00:23:10.313 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.313 "host": "nqn.2016-06.io.spdk:host1", 00:23:10.313 "psk": "key0" 00:23:10.313 } 00:23:10.313 }, 00:23:10.313 { 00:23:10.313 "method": "nvmf_subsystem_add_ns", 00:23:10.313 "params": { 00:23:10.313 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:10.313 "namespace": { 00:23:10.313 "nsid": 1, 00:23:10.313 "bdev_name": "malloc0", 00:23:10.313 "nguid": "04173CB8BFE24F6F9ADAE77B3A319B8B", 00:23:10.313 "uuid": "04173cb8-bfe2-4f6f-9ada-e77b3a319b8b", 00:23:10.313 "no_auto_visible": false 00:23:10.313 } 00:23:10.313 } 00:23:10.313 }, 00:23:10.313 { 00:23:10.313 "method": "nvmf_subsystem_add_listener", 00:23:10.313 "params": { 00:23:10.313 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.313 "listen_address": { 00:23:10.313 "trtype": "TCP", 00:23:10.313 "adrfam": "IPv4", 00:23:10.313 "traddr": "10.0.0.2", 00:23:10.313 "trsvcid": "4420" 00:23:10.313 }, 00:23:10.313 "secure_channel": true 00:23:10.313 } 00:23:10.313 } 00:23:10.313 ] 00:23:10.313 } 00:23:10.313 ] 00:23:10.313 }' 00:23:10.313 18:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:10.313 18:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.313 18:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1430967 00:23:10.313 18:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:10.313 18:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1430967 00:23:10.313 18:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1430967 ']' 00:23:10.313 18:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.313 18:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:10.313 18:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.313 18:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:10.313 18:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.313 [2024-07-20 18:53:20.525279] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:10.313 [2024-07-20 18:53:20.525365] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.313 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.313 [2024-07-20 18:53:20.589692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.572 [2024-07-20 18:53:20.677423] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.572 [2024-07-20 18:53:20.677487] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.572 [2024-07-20 18:53:20.677500] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.572 [2024-07-20 18:53:20.677512] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.572 [2024-07-20 18:53:20.677521] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:10.572 [2024-07-20 18:53:20.677621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.830 [2024-07-20 18:53:20.921428] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.830 [2024-07-20 18:53:20.953431] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:10.830 [2024-07-20 18:53:20.963999] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.397 18:53:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:11.397 18:53:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:11.397 18:53:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:11.397 18:53:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:11.397 18:53:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.397 18:53:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.397 18:53:21 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1431118 00:23:11.397 18:53:21 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1431118 /var/tmp/bdevperf.sock 00:23:11.397 18:53:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1431118 ']' 00:23:11.397 18:53:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.397 18:53:21 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:11.397 18:53:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:11.397 18:53:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
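The target-side bring-up above follows a capture-and-replay pattern: the configuration shown in the echoed JSON (including the TLS pieces, keyring_file_add_key for key0, nvmf_subsystem_add_host with "psk": "key0", and nvmf_subsystem_add_listener with "secure_channel": true) is fed to a fresh nvmf_tgt through a file descriptor rather than a file on disk. A minimal sketch of that pattern, assuming repository-relative paths and an illustrative temporary file name:

  # capture the live target configuration, TLS settings included
  scripts/rpc.py save_config > /tmp/nvmf_config.json
  # replay it into a new instance; -c /dev/fd/62 makes the app read its
  # config from fd 62, which the shell opens on the saved file
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 62< /tmp/nvmf_config.json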
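bdevperf is started the same way, with its initiator config delivered over /dev/fd/63 and the harness polling the RPC socket before issuing any RPCs. A rough equivalent of that launch (the retry loop and the rpc_get_methods probe are assumptions; the flags are the ones traced above: -m core mask, -z wait for RPC configuration, -r RPC socket, -q queue depth, -o I/O size, -w workload, -t run time in seconds):

  build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
  bdevperf_pid=$!
  # block until the UNIX-domain RPC socket answers
  for _ in $(seq 1 100); do
      scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done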
00:23:11.397 18:53:21 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:11.397 "subsystems": [ 00:23:11.397 { 00:23:11.397 "subsystem": "keyring", 00:23:11.397 "config": [ 00:23:11.397 { 00:23:11.397 "method": "keyring_file_add_key", 00:23:11.397 "params": { 00:23:11.397 "name": "key0", 00:23:11.397 "path": "/tmp/tmp.kSRLDP6KZB" 00:23:11.397 } 00:23:11.397 } 00:23:11.397 ] 00:23:11.397 }, 00:23:11.397 { 00:23:11.397 "subsystem": "iobuf", 00:23:11.397 "config": [ 00:23:11.397 { 00:23:11.397 "method": "iobuf_set_options", 00:23:11.397 "params": { 00:23:11.397 "small_pool_count": 8192, 00:23:11.397 "large_pool_count": 1024, 00:23:11.397 "small_bufsize": 8192, 00:23:11.397 "large_bufsize": 135168 00:23:11.397 } 00:23:11.397 } 00:23:11.397 ] 00:23:11.397 }, 00:23:11.397 { 00:23:11.397 "subsystem": "sock", 00:23:11.397 "config": [ 00:23:11.397 { 00:23:11.397 "method": "sock_set_default_impl", 00:23:11.397 "params": { 00:23:11.397 "impl_name": "posix" 00:23:11.397 } 00:23:11.397 }, 00:23:11.397 { 00:23:11.397 "method": "sock_impl_set_options", 00:23:11.397 "params": { 00:23:11.397 "impl_name": "ssl", 00:23:11.397 "recv_buf_size": 4096, 00:23:11.397 "send_buf_size": 4096, 00:23:11.397 "enable_recv_pipe": true, 00:23:11.397 "enable_quickack": false, 00:23:11.397 "enable_placement_id": 0, 00:23:11.397 "enable_zerocopy_send_server": true, 00:23:11.397 "enable_zerocopy_send_client": false, 00:23:11.397 "zerocopy_threshold": 0, 00:23:11.397 "tls_version": 0, 00:23:11.397 "enable_ktls": false 00:23:11.397 } 00:23:11.397 }, 00:23:11.397 { 00:23:11.397 "method": "sock_impl_set_options", 00:23:11.397 "params": { 00:23:11.397 "impl_name": "posix", 00:23:11.397 "recv_buf_size": 2097152, 00:23:11.397 "send_buf_size": 2097152, 00:23:11.397 "enable_recv_pipe": true, 00:23:11.397 "enable_quickack": false, 00:23:11.397 "enable_placement_id": 0, 00:23:11.397 "enable_zerocopy_send_server": true, 00:23:11.397 "enable_zerocopy_send_client": false, 00:23:11.397 "zerocopy_threshold": 0, 00:23:11.397 "tls_version": 0, 00:23:11.397 "enable_ktls": false 00:23:11.397 } 00:23:11.397 } 00:23:11.397 ] 00:23:11.397 }, 00:23:11.397 { 00:23:11.397 "subsystem": "vmd", 00:23:11.397 "config": [] 00:23:11.397 }, 00:23:11.397 { 00:23:11.397 "subsystem": "accel", 00:23:11.397 "config": [ 00:23:11.397 { 00:23:11.397 "method": "accel_set_options", 00:23:11.397 "params": { 00:23:11.397 "small_cache_size": 128, 00:23:11.397 "large_cache_size": 16, 00:23:11.397 "task_count": 2048, 00:23:11.397 "sequence_count": 2048, 00:23:11.397 "buf_count": 2048 00:23:11.397 } 00:23:11.397 } 00:23:11.397 ] 00:23:11.397 }, 00:23:11.397 { 00:23:11.397 "subsystem": "bdev", 00:23:11.397 "config": [ 00:23:11.397 { 00:23:11.397 "method": "bdev_set_options", 00:23:11.397 "params": { 00:23:11.397 "bdev_io_pool_size": 65535, 00:23:11.397 "bdev_io_cache_size": 256, 00:23:11.397 "bdev_auto_examine": true, 00:23:11.397 "iobuf_small_cache_size": 128, 00:23:11.397 "iobuf_large_cache_size": 16 00:23:11.397 } 00:23:11.397 }, 00:23:11.397 { 00:23:11.397 "method": "bdev_raid_set_options", 00:23:11.397 "params": { 00:23:11.397 "process_window_size_kb": 1024 00:23:11.397 } 00:23:11.397 }, 00:23:11.397 { 00:23:11.397 "method": "bdev_iscsi_set_options", 00:23:11.397 "params": { 00:23:11.397 "timeout_sec": 30 00:23:11.398 } 00:23:11.398 }, 00:23:11.398 { 00:23:11.398 "method": "bdev_nvme_set_options", 00:23:11.398 "params": { 00:23:11.398 "action_on_timeout": "none", 00:23:11.398 "timeout_us": 0, 00:23:11.398 "timeout_admin_us": 0, 00:23:11.398 "keep_alive_timeout_ms": 
10000, 00:23:11.398 "arbitration_burst": 0, 00:23:11.398 "low_priority_weight": 0, 00:23:11.398 "medium_priority_weight": 0, 00:23:11.398 "high_priority_weight": 0, 00:23:11.398 "nvme_adminq_poll_period_us": 10000, 00:23:11.398 "nvme_ioq_poll_period_us": 0, 00:23:11.398 "io_queue_requests": 512, 00:23:11.398 "delay_cmd_submit": true, 00:23:11.398 "transport_retry_count": 4, 00:23:11.398 "bdev_retry_count": 3, 00:23:11.398 "transport_ack_timeout": 0, 00:23:11.398 "ctrlr_loss_timeout_sec": 0, 00:23:11.398 "reconnect_delay_sec": 0, 00:23:11.398 "fast_io_fail_timeout_sec": 0, 00:23:11.398 "disable_auto_failback": false, 00:23:11.398 "generate_uuids": false, 00:23:11.398 "transport_tos": 0, 00:23:11.398 "nvme_error_stat": false, 00:23:11.398 "rdma_srq_size": 0, 00:23:11.398 "io_path_stat": false, 00:23:11.398 "allow_accel_sequence": false, 00:23:11.398 "rdma_max_cq_size": 0, 00:23:11.398 "rdma_cm_event_timeout_ms": 0, 00:23:11.398 "dhchap_digests": [ 00:23:11.398 "sha256", 00:23:11.398 "sha384", 00:23:11.398 "sha512" 00:23:11.398 ], 00:23:11.398 "dhchap_dhgroups": [ 00:23:11.398 "null", 00:23:11.398 "ffdhe2048", 00:23:11.398 "ffdhe3072", 00:23:11.398 "ffdhe4096", 00:23:11.398 "ffdhe6144", 00:23:11.398 "ffdhe8192" 00:23:11.398 ] 00:23:11.398 } 00:23:11.398 }, 00:23:11.398 { 00:23:11.398 "method": "bdev_nvme_attach_controller", 00:23:11.398 "params": { 00:23:11.398 "name": "nvme0", 00:23:11.398 "trtype": "TCP", 00:23:11.398 "adrfam": "IPv4", 00:23:11.398 "traddr": "10.0.0.2", 00:23:11.398 "trsvcid": "4420", 00:23:11.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.398 "prchk_reftag": false, 00:23:11.398 "prchk_guard": false, 00:23:11.398 "ctrlr_loss_timeout_sec": 0, 00:23:11.398 "reconnect_delay_sec": 0, 00:23:11.398 "fast_io_fail_timeout_sec": 0, 00:23:11.398 "psk": "key0", 00:23:11.398 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:11.398 "hdgst": false, 00:23:11.398 "ddgst": false 00:23:11.398 } 00:23:11.398 }, 00:23:11.398 { 00:23:11.398 "method": "bdev_nvme_set_hotplug", 00:23:11.398 "params": { 00:23:11.398 "period_us": 100000, 00:23:11.398 "enable": false 00:23:11.398 } 00:23:11.398 }, 00:23:11.398 { 00:23:11.398 "method": "bdev_enable_histogram", 00:23:11.398 "params": { 00:23:11.398 "name": "nvme0n1", 00:23:11.398 "enable": true 00:23:11.398 } 00:23:11.398 }, 00:23:11.398 { 00:23:11.398 "method": "bdev_wait_for_examine" 00:23:11.398 } 00:23:11.398 ] 00:23:11.398 }, 00:23:11.398 { 00:23:11.398 "subsystem": "nbd", 00:23:11.398 "config": [] 00:23:11.398 } 00:23:11.398 ] 00:23:11.398 }' 00:23:11.398 18:53:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:11.398 18:53:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.398 [2024-07-20 18:53:21.573781] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
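The JSON just echoed to bdevperf is the initiator half of the TLS setup; the same state can be reached with the two RPCs already used earlier in this run, which register the PSK file under the name key0 and then attach an NVMe/TCP controller that negotiates TLS with it:

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kSRLDP6KZB
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1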
00:23:11.398 [2024-07-20 18:53:21.573876] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431118 ] 00:23:11.398 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.398 [2024-07-20 18:53:21.634956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.659 [2024-07-20 18:53:21.721918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.659 [2024-07-20 18:53:21.900273] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:12.593 18:53:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:12.593 18:53:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:12.593 18:53:22 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:12.593 18:53:22 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:12.593 18:53:22 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.593 18:53:22 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:12.864 Running I/O for 1 seconds... 00:23:13.795 00:23:13.795 Latency(us) 00:23:13.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.795 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:13.795 Verification LBA range: start 0x0 length 0x2000 00:23:13.795 nvme0n1 : 1.12 665.76 2.60 0.00 0.00 185749.24 6796.33 222142.77 00:23:13.795 =================================================================================================================== 00:23:13.795 Total : 665.76 2.60 0.00 0.00 185749.24 6796.33 222142.77 00:23:13.795 0 00:23:13.795 18:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:13.795 18:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:13.795 18:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:13.795 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:23:13.795 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:23:13.795 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:13.795 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:13.795 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:13.795 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:13.795 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:13.795 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:13.795 nvmf_trace.0 00:23:14.051 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:23:14.051 18:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1431118 00:23:14.051 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1431118 ']' 00:23:14.051 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1431118 
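Once the attached controller is visible, the verification pass and the trace capture traced above reduce to three commands (the tar destination is the job's output directory, written here as $output_dir for illustration):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0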
00:23:14.051 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:14.051 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:14.051 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1431118 00:23:14.051 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:14.051 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:14.051 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1431118' 00:23:14.051 killing process with pid 1431118 00:23:14.051 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1431118 00:23:14.051 Received shutdown signal, test time was about 1.000000 seconds 00:23:14.051 00:23:14.051 Latency(us) 00:23:14.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.051 =================================================================================================================== 00:23:14.051 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:14.051 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1431118 00:23:14.309 18:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:14.309 18:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:14.309 18:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:14.309 18:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:14.309 18:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:14.309 18:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:14.309 18:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:14.309 rmmod nvme_tcp 00:23:14.309 rmmod nvme_fabrics 00:23:14.309 rmmod nvme_keyring 00:23:14.309 18:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:14.309 18:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:14.309 18:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:14.309 18:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1430967 ']' 00:23:14.309 18:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1430967 00:23:14.309 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1430967 ']' 00:23:14.309 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1430967 00:23:14.309 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:14.309 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:14.309 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1430967 00:23:14.309 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:14.309 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:14.309 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1430967' 00:23:14.309 killing process with pid 1430967 00:23:14.309 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1430967 00:23:14.309 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1430967 00:23:14.566 18:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:14.566 18:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:14.566 18:53:24 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:14.566 18:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:14.566 18:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:14.566 18:53:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.566 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:14.566 18:53:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.464 18:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:16.464 18:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.UtJfFII5Dd /tmp/tmp.1VRy3WDkZn /tmp/tmp.kSRLDP6KZB 00:23:16.464 00:23:16.464 real 1m19.520s 00:23:16.464 user 2m7.435s 00:23:16.464 sys 0m27.421s 00:23:16.464 18:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:16.464 18:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.464 ************************************ 00:23:16.464 END TEST nvmf_tls 00:23:16.464 ************************************ 00:23:16.723 18:53:26 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:16.723 18:53:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:16.723 18:53:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:16.723 18:53:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:16.723 ************************************ 00:23:16.723 START TEST nvmf_fips 00:23:16.723 ************************************ 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:16.723 * Looking for test storage... 
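Before touching NVMe-oF, fips.sh validates the crypto environment, as the trace that follows shows: OpenSSL must be at least 3.0.0, the provider list must include both a base and a fips provider, and an MD5 digest is expected to fail, since a working MD5 would mean FIPS mode is not actually enforced. A condensed sketch of that check (the comparison and error handling in the real script differ):

  target=3.0.0
  current=$(openssl version | awk '{print $2}')                    # 3.0.9 in this run
  printf '%s\n%s\n' "$target" "$current" | sort -V -C || exit 1    # require current >= target
  openssl list -providers | grep name                              # base + fips providers expected
  if openssl md5 /dev/null >/dev/null 2>&1; then
      echo "md5 succeeded, FIPS mode is not enforced"; exit 1
  fi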
00:23:16.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.723 18:53:26 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:16.723 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:16.724 18:53:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:16.724 Error setting digest 00:23:16.724 0002F25CA37F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:16.724 0002F25CA37F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:16.724 18:53:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:16.724 18:53:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:16.724 18:53:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:16.724 18:53:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:16.724 18:53:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:16.724 18:53:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:16.724 18:53:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.724 18:53:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:16.724 18:53:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:16.724 18:53:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:16.724 18:53:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.724 18:53:27 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.724 18:53:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.724 18:53:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:16.724 18:53:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:16.724 18:53:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:16.724 18:53:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:19.266 
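What the fips.sh trace above boils down to is three host checks before any NVMe/TCP traffic is attempted: the FIPS module file must exist under the directory reported by openssl info -modulesdir, openssl list -providers must report both a base and a fips provider once OPENSSL_CONF points at the generated spdk_fips.conf, and a non-approved digest such as MD5 must be rejected (the "Error setting digest" lines are the expected outcome, not a failure). A minimal standalone sketch of the same gate, assuming an OpenSSL 3.x host and omitting the spdk_fips.conf generation; the helper name check_fips_gate is illustrative and is not part of the SPDK scripts:

  #!/usr/bin/env bash
  # Illustrative helper (not from fips.sh): fail unless the host enforces FIPS.
  check_fips_gate() {
      local moddir providers
      moddir=$(openssl info -modulesdir) || return 1
      [[ -f "$moddir/fips.so" ]] || { echo "no FIPS module under $moddir"; return 1; }

      # Expect both a base and a fips provider to be loaded.
      providers=$(openssl list -providers | grep name)
      grep -qi 'base' <<<"$providers" || { echo "base provider missing"; return 1; }
      grep -qi 'fips' <<<"$providers" || { echo "fips provider missing"; return 1; }

      # MD5 must be refused; if it succeeds, FIPS is not actually enforced.
      if echo test | openssl md5 >/dev/null 2>&1; then
          echo "MD5 unexpectedly succeeded - FIPS mode not enforced"
          return 1
      fi
  }
  check_fips_gate && echo "FIPS gate passed"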
18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:19.266 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:19.266 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:19.267 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:19.267 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:19.267 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:19.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:19.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:23:19.267 00:23:19.267 --- 10.0.0.2 ping statistics --- 00:23:19.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.267 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:19.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:19.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:23:19.267 00:23:19.267 --- 10.0.0.1 ping statistics --- 00:23:19.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.267 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1433479 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1433479 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 1433479 ']' 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:19.267 [2024-07-20 18:53:29.288014] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:19.267 [2024-07-20 18:53:29.288112] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.267 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.267 [2024-07-20 18:53:29.350893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.267 [2024-07-20 18:53:29.434441] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.267 [2024-07-20 18:53:29.434492] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:19.267 [2024-07-20 18:53:29.434516] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.267 [2024-07-20 18:53:29.434527] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.267 [2024-07-20 18:53:29.434536] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:19.267 [2024-07-20 18:53:29.434561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:19.267 18:53:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:19.833 [2024-07-20 18:53:29.851290] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.833 [2024-07-20 18:53:29.867247] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:19.833 [2024-07-20 18:53:29.867521] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.833 [2024-07-20 18:53:29.899897] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:19.833 malloc0 00:23:19.833 18:53:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:19.833 18:53:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1433515 00:23:19.833 18:53:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:19.833 18:53:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1433515 /var/tmp/bdevperf.sock 00:23:19.833 18:53:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 1433515 ']' 00:23:19.833 18:53:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.833 18:53:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- 
# local max_retries=100 00:23:19.833 18:53:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.833 18:53:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:19.833 18:53:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:19.833 [2024-07-20 18:53:30.000213] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:19.833 [2024-07-20 18:53:30.000288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1433515 ] 00:23:19.833 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.833 [2024-07-20 18:53:30.071046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.091 [2024-07-20 18:53:30.161663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.657 18:53:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:20.657 18:53:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:20.657 18:53:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:20.915 [2024-07-20 18:53:31.171588] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:20.915 [2024-07-20 18:53:31.171720] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:21.172 TLSTESTn1 00:23:21.172 18:53:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:21.172 Running I/O for 10 seconds... 
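The interleaved output above involves two processes: nvmf_tgt (pid 1433479) already serving a TLS listener on 10.0.0.2:4420 inside the namespace, and bdevperf (pid 1433515) started idle and told over its own RPC socket to attach with a pre-shared key. Condensed to the commands that matter, with $SPDK_ROOT standing in for the Jenkins workspace checkout, the waitforlisten synchronization omitted, and the target-side subsystem setup left inside setup_nvmf_tgt_conf as in the trace:

  # Interchange-format TLS PSK, written with owner-only permissions (fips.sh@136-139).
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
  chmod 0600 key.txt

  # Start bdevperf idle (-z) on a private RPC socket; workload parameters as in the log.
  $SPDK_ROOT/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &

  # Attach to the TLS listener using the PSK (the exact RPC from fips.sh@150).
  $SPDK_ROOT/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt

  # Run the 10-second verify workload over the TLS connection (fips.sh@154).
  $SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests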
00:23:33.361 00:23:33.361 Latency(us) 00:23:33.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.361 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:33.361 Verification LBA range: start 0x0 length 0x2000 00:23:33.361 TLSTESTn1 : 10.14 759.39 2.97 0.00 0.00 167686.48 7961.41 214375.54 00:23:33.361 =================================================================================================================== 00:23:33.361 Total : 759.39 2.97 0.00 0.00 167686.48 7961.41 214375.54 00:23:33.361 0 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:33.361 nvmf_trace.0 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1433515 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 1433515 ']' 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 1433515 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1433515 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1433515' 00:23:33.361 killing process with pid 1433515 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 1433515 00:23:33.361 Received shutdown signal, test time was about 10.000000 seconds 00:23:33.361 00:23:33.361 Latency(us) 00:23:33.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.361 =================================================================================================================== 00:23:33.361 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:33.361 [2024-07-20 18:53:41.646408] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 1433515 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:33.361 rmmod nvme_tcp 00:23:33.361 rmmod nvme_fabrics 00:23:33.361 rmmod nvme_keyring 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1433479 ']' 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1433479 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 1433479 ']' 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 1433479 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1433479 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1433479' 00:23:33.361 killing process with pid 1433479 00:23:33.361 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 1433479 00:23:33.362 [2024-07-20 18:53:41.922930] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:33.362 18:53:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 1433479 00:23:33.362 18:53:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:33.362 18:53:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:33.362 18:53:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:33.362 18:53:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:33.362 18:53:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:33.362 18:53:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.362 18:53:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.362 18:53:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.927 18:53:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:33.927 18:53:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:33.927 00:23:33.927 real 0m17.353s 00:23:33.927 user 0m22.444s 00:23:33.927 sys 0m6.206s 00:23:33.927 18:53:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:33.927 18:53:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:33.927 ************************************ 00:23:33.927 END TEST nvmf_fips 
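Cleanup runs from the EXIT trap and mirrors the setup in reverse. A rough sketch of what killprocess, nvmfcleanup and remove_spdk_ns amount to here; the namespace deletion is an assumption (the trace only shows the module removal and address flush explicitly), and the pid variable names are the script's own:

  # Stop the initiator and the target; killprocess signals the pid and waits for it.
  kill "$bdevperf_pid" 2>/dev/null; wait "$bdevperf_pid" 2>/dev/null
  kill "$nvmfpid"      2>/dev/null; wait "$nvmfpid"      2>/dev/null

  # Unload the kernel NVMe-oF initiator modules (the rmmod lines in the log).
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # Drop the target-side namespace and clear the initiator-side address.
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed: remove_spdk_ns is not expanded in the trace
  ip -4 addr flush cvl_0_1

  # Remove the on-disk PSK so it cannot leak into later tests.
  rm -f key.txt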
00:23:33.927 ************************************ 00:23:33.927 18:53:44 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:23:33.927 18:53:44 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:33.927 18:53:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:33.927 18:53:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:33.927 18:53:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:33.927 ************************************ 00:23:33.927 START TEST nvmf_fuzz 00:23:33.927 ************************************ 00:23:33.927 18:53:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:34.184 * Looking for test storage... 00:23:34.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:34.184 18:53:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:34.184 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:34.184 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:34.184 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:34.184 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:34.184 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:34.184 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:34.184 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:34.184 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:34.184 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:34.184 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:34.184 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:34.184 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:34.184 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:34.184 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:34.184 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:34.184 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:34.184 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:34.184 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:34.184 18:53:44 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:34.184 18:53:44 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:34.184 18:53:44 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:34.184 18:53:44 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:34.185 18:53:44 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:34.185 18:53:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:36.082 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:36.082 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:36.083 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:36.083 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:36.083 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:36.083 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:36.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:36.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:23:36.083 00:23:36.083 --- 10.0.0.2 ping statistics --- 00:23:36.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.083 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:36.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:36.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:23:36.083 00:23:36.083 --- 10.0.0.1 ping statistics --- 00:23:36.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.083 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1436880 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1436880 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 1436880 ']' 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
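Both suites build the same back-to-back topology out of the two E810 ports, which are presumably cabled to each other: the target port (cvl_0_0) is hidden in the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, and a single iptables rule lets NVMe/TCP traffic in on the initiator-facing interface. Written out as the plain ip/iptables commands the trace executes:

  # Flush any stale addressing, then move the target port into its own namespace.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Initiator side stays in the root namespace; target side lives in the namespace.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Allow NVMe/TCP (port 4420) in on the initiator-facing interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Sanity check in both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1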
00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:36.083 18:53:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:36.342 18:53:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:36.342 18:53:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:23:36.342 18:53:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:36.342 18:53:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.342 18:53:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:36.342 18:53:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.342 18:53:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:36.342 18:53:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.342 18:53:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:36.600 Malloc0 00:23:36.600 18:53:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.600 18:53:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:36.600 18:53:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.600 18:53:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:36.600 18:53:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.600 18:53:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:36.600 18:53:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.600 18:53:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:36.600 18:53:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.600 18:53:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:36.600 18:53:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.600 18:53:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:36.600 18:53:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.600 18:53:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:36.600 18:53:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:08.653 Fuzzing completed. 
Shutting down the fuzz application 00:24:08.653 00:24:08.653 Dumping successful admin opcodes: 00:24:08.653 8, 9, 10, 24, 00:24:08.653 Dumping successful io opcodes: 00:24:08.653 0, 9, 00:24:08.653 NS: 0x200003aeff00 I/O qp, Total commands completed: 454755, total successful commands: 2638, random_seed: 2057866496 00:24:08.653 NS: 0x200003aeff00 admin qp, Total commands completed: 56752, total successful commands: 452, random_seed: 2493140288 00:24:08.653 18:54:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:08.653 Fuzzing completed. Shutting down the fuzz application 00:24:08.653 00:24:08.653 Dumping successful admin opcodes: 00:24:08.653 24, 00:24:08.653 Dumping successful io opcodes: 00:24:08.653 00:24:08.653 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3472658364 00:24:08.654 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3472778048 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:08.654 rmmod nvme_tcp 00:24:08.654 rmmod nvme_fabrics 00:24:08.654 rmmod nvme_keyring 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1436880 ']' 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 1436880 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 1436880 ']' 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 1436880 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1436880 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 
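fabrics_fuzz.sh drives two passes of nvme_fuzz against a single malloc-backed subsystem: thirty seconds of seeded random commands, then a replay of the canned requests in example.json. The target-side RPCs and both invocations, condensed from the trace; rpc.py below stands for the framework's rpc_cmd wrapper against the target's default socket, nvme_fuzz for test/app/fuzz/nvme_fuzz/nvme_fuzz, and $SPDK_ROOT for the Jenkins workspace checkout:

  # Target side: TCP transport, a 64 MiB malloc bdev with 512-byte blocks,
  # and one subsystem exposing it on 10.0.0.2:4420.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create -b Malloc0 64 512
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'

  # Pass 1: 30 s of random commands with a fixed seed (flags exactly as in the trace).
  nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a

  # Pass 2: replay the known request set from example.json against the same target.
  nvme_fuzz -m 0x2 -F "$trid" -j "$SPDK_ROOT/test/app/fuzz/nvme_fuzz/example.json" -a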
00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1436880' 00:24:08.654 killing process with pid 1436880 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 1436880 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 1436880 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.654 18:54:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.198 18:54:20 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:11.198 18:54:20 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:11.198 00:24:11.198 real 0m36.721s 00:24:11.198 user 0m50.922s 00:24:11.198 sys 0m15.381s 00:24:11.198 18:54:20 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:11.198 18:54:20 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:11.198 ************************************ 00:24:11.198 END TEST nvmf_fuzz 00:24:11.198 ************************************ 00:24:11.198 18:54:20 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:11.198 18:54:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:11.198 18:54:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:11.198 18:54:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:11.198 ************************************ 00:24:11.198 START TEST nvmf_multiconnection 00:24:11.198 ************************************ 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:11.198 * Looking for test storage... 
00:24:11.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:11.198 18:54:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.147 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:13.148 18:54:22 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:13.148 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:13.148 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:13.148 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:13.148 18:54:22 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:13.148 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:13.148 18:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
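The block above is where the test builds its point-to-point TCP topology: the first port of the detected E810 pair (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. A hand-run sketch of the same steps, with the device and namespace names copied from the log (assumes root privileges):

    # target-side port goes into its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps 10.0.0.1 in the root namespace, target gets 10.0.0.2 inside it
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up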
00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:13.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:13.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:24:13.148 00:24:13.148 --- 10.0.0.2 ping statistics --- 00:24:13.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.148 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:13.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:13.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:24:13.148 00:24:13.148 --- 10.0.0.1 ping statistics --- 00:24:13.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.148 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1443095 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1443095 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 1443095 ']' 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
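With connectivity confirmed by the two pings, nvmfappstart launches the target application inside the namespace and waits for its RPC socket before issuing any RPCs. The essential invocation, reconstructed from the trace above (the polling loop here is a simplified stand-in for the waitforlisten helper, and the binary path is given relative to the SPDK tree):

    # start nvmf_tgt in the target namespace: shm id 0, all trace groups, 4 cores
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # crude wait until the app has created its RPC socket
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done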
00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:13.148 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.148 [2024-07-20 18:54:23.193600] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:13.148 [2024-07-20 18:54:23.193669] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.148 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.148 [2024-07-20 18:54:23.263579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:13.148 [2024-07-20 18:54:23.357764] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.148 [2024-07-20 18:54:23.357841] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.148 [2024-07-20 18:54:23.357868] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.148 [2024-07-20 18:54:23.357881] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.148 [2024-07-20 18:54:23.357893] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:13.148 [2024-07-20 18:54:23.357961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.148 [2024-07-20 18:54:23.358053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.148 [2024-07-20 18:54:23.358132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:13.148 [2024-07-20 18:54:23.358134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.407 [2024-07-20 18:54:23.516625] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.407 18:54:23 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.407 Malloc1 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.407 [2024-07-20 18:54:23.574144] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.407 Malloc2 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.407 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.408 18:54:23 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.408 Malloc3 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.408 Malloc4 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.408 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.667 Malloc5 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.667 Malloc6 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.667 18:54:23 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.667 Malloc7 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.667 Malloc8 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.667 Malloc9 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.667 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.940 Malloc10 00:24:13.940 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.940 18:54:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:13.940 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.940 18:54:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.940 Malloc11 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
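The lines above repeat the same four RPCs for cnode1 through cnode11: create a 64 MiB, 512-byte-block malloc bdev, create the subsystem with a matching SPDKn serial, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. rpc_cmd is the autotest wrapper around scripts/rpc.py, so a plain-shell sketch of the same loop looks roughly like this (paths relative to the SPDK tree and the default RPC socket are assumptions):

    # one-time transport setup, then one subsystem per iteration
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 1 11); do
        ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done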
00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:13.940 18:54:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:14.504 18:54:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:14.504 18:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:14.504 18:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:14.504 18:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:14.504 18:54:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:16.399 18:54:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:16.399 18:54:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:16.399 18:54:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:24:16.399 18:54:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:16.399 18:54:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:16.399 18:54:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:16.399 18:54:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.399 18:54:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:17.331 18:54:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:17.331 18:54:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:17.331 18:54:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:17.331 18:54:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:17.331 18:54:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:19.244 18:54:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:19.244 18:54:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:19.244 18:54:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:24:19.244 18:54:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:19.244 18:54:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:19.244 
18:54:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:19.244 18:54:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.244 18:54:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:19.810 18:54:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:19.810 18:54:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:19.810 18:54:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:19.810 18:54:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:19.810 18:54:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:21.710 18:54:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:21.710 18:54:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:21.710 18:54:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:24:21.710 18:54:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:21.710 18:54:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:21.710 18:54:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:21.710 18:54:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.710 18:54:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:22.276 18:54:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:22.276 18:54:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:22.276 18:54:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:22.276 18:54:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:22.276 18:54:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:24.810 18:54:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:24.810 18:54:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:24.810 18:54:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:24:24.810 18:54:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:24.810 18:54:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:24.810 18:54:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:24.810 18:54:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:24.810 18:54:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:25.067 18:54:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:25.067 18:54:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:25.067 18:54:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:25.067 18:54:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:25.067 18:54:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:27.595 18:54:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:27.595 18:54:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:27.595 18:54:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:24:27.595 18:54:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:27.595 18:54:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:27.595 18:54:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:27.595 18:54:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:27.595 18:54:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:27.857 18:54:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:27.857 18:54:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:27.857 18:54:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:27.857 18:54:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:27.857 18:54:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:30.392 18:54:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:30.392 18:54:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:30.392 18:54:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:24:30.392 18:54:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:30.392 18:54:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:30.392 18:54:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:30.392 18:54:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:30.392 18:54:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:30.649 18:54:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:30.649 18:54:40 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:30.649 18:54:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:30.649 18:54:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:30.649 18:54:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:32.570 18:54:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:32.570 18:54:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:32.570 18:54:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:24:32.570 18:54:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:32.570 18:54:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:32.570 18:54:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:32.570 18:54:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:32.570 18:54:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:33.502 18:54:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:33.502 18:54:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:33.502 18:54:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:33.502 18:54:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:33.502 18:54:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:35.399 18:54:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:35.399 18:54:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:35.399 18:54:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:24:35.399 18:54:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:35.399 18:54:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:35.399 18:54:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:35.399 18:54:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:35.399 18:54:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:36.331 18:54:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:36.331 18:54:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:36.331 18:54:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:36.331 18:54:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 
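Each connect in this stretch is followed by waitforserial, which polls lsblk until a block device carrying the expected SPDKn serial appears (the trace shows up to 16 attempts, 2 seconds apart). A condensed initiator-side version of one such iteration, with the host NQN/ID copied from the log and the loop rewritten as a plain for-loop rather than the exact helper:

    # connect to one subsystem and wait for its namespace to show up
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode9 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    for i in $(seq 1 16); do
        [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK9)" -ge 1 ] && break
        sleep 2
    done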
00:24:36.331 18:54:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:38.229 18:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:38.229 18:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:38.229 18:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:24:38.229 18:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:38.229 18:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:38.229 18:54:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:38.229 18:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.229 18:54:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:39.162 18:54:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:39.162 18:54:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:39.162 18:54:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:39.162 18:54:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:39.162 18:54:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:41.060 18:54:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:41.060 18:54:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:41.060 18:54:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:24:41.060 18:54:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:41.060 18:54:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:41.060 18:54:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:41.060 18:54:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:41.060 18:54:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:41.993 18:54:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:41.993 18:54:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:41.993 18:54:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:41.993 18:54:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:41.993 18:54:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:43.889 18:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:43.889 18:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o 
NAME,SERIAL 00:24:43.889 18:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:24:43.889 18:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:43.889 18:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:43.889 18:54:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:43.889 18:54:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:43.889 [global] 00:24:43.889 thread=1 00:24:43.889 invalidate=1 00:24:43.889 rw=read 00:24:43.889 time_based=1 00:24:43.889 runtime=10 00:24:43.889 ioengine=libaio 00:24:43.889 direct=1 00:24:43.889 bs=262144 00:24:43.889 iodepth=64 00:24:43.889 norandommap=1 00:24:43.889 numjobs=1 00:24:43.889 00:24:43.889 [job0] 00:24:43.889 filename=/dev/nvme0n1 00:24:43.889 [job1] 00:24:43.889 filename=/dev/nvme10n1 00:24:43.889 [job2] 00:24:43.889 filename=/dev/nvme1n1 00:24:43.889 [job3] 00:24:43.889 filename=/dev/nvme2n1 00:24:43.889 [job4] 00:24:43.889 filename=/dev/nvme3n1 00:24:43.889 [job5] 00:24:43.889 filename=/dev/nvme4n1 00:24:43.889 [job6] 00:24:43.889 filename=/dev/nvme5n1 00:24:43.889 [job7] 00:24:43.889 filename=/dev/nvme6n1 00:24:43.889 [job8] 00:24:43.889 filename=/dev/nvme7n1 00:24:43.889 [job9] 00:24:43.889 filename=/dev/nvme8n1 00:24:43.889 [job10] 00:24:43.889 filename=/dev/nvme9n1 00:24:43.889 Could not set queue depth (nvme0n1) 00:24:43.889 Could not set queue depth (nvme10n1) 00:24:43.889 Could not set queue depth (nvme1n1) 00:24:43.889 Could not set queue depth (nvme2n1) 00:24:43.889 Could not set queue depth (nvme3n1) 00:24:43.889 Could not set queue depth (nvme4n1) 00:24:43.889 Could not set queue depth (nvme5n1) 00:24:43.889 Could not set queue depth (nvme6n1) 00:24:43.889 Could not set queue depth (nvme7n1) 00:24:43.889 Could not set queue depth (nvme8n1) 00:24:43.889 Could not set queue depth (nvme9n1) 00:24:44.147 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.147 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.147 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.147 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.147 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.147 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.147 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.147 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.147 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.147 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.147 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.147 fio-3.35 00:24:44.147 Starting 11 threads 00:24:56.395 00:24:56.395 job0: 
(groupid=0, jobs=1): err= 0: pid=1447207: Sat Jul 20 18:55:04 2024 00:24:56.395 read: IOPS=446, BW=112MiB/s (117MB/s)(1125MiB/10080msec) 00:24:56.395 slat (usec): min=10, max=514537, avg=1119.60, stdev=11312.20 00:24:56.395 clat (msec): min=8, max=1315, avg=142.12, stdev=114.04 00:24:56.395 lat (msec): min=8, max=1315, avg=143.24, stdev=115.65 00:24:56.395 clat percentiles (msec): 00:24:56.395 | 1.00th=[ 19], 5.00th=[ 48], 10.00th=[ 68], 20.00th=[ 83], 00:24:56.395 | 30.00th=[ 93], 40.00th=[ 103], 50.00th=[ 115], 60.00th=[ 134], 00:24:56.395 | 70.00th=[ 150], 80.00th=[ 171], 90.00th=[ 207], 95.00th=[ 334], 00:24:56.395 | 99.00th=[ 735], 99.50th=[ 810], 99.90th=[ 869], 99.95th=[ 869], 00:24:56.395 | 99.99th=[ 1318] 00:24:56.395 bw ( KiB/s): min=49152, max=163328, per=8.18%, avg=119566.95, stdev=34257.91, samples=19 00:24:56.395 iops : min= 192, max= 638, avg=467.00, stdev=133.80, samples=19 00:24:56.395 lat (msec) : 10=0.07%, 20=1.31%, 50=3.98%, 100=32.35%, 250=56.50% 00:24:56.395 lat (msec) : 500=3.02%, 750=2.07%, 1000=0.69%, 2000=0.02% 00:24:56.395 cpu : usr=0.17%, sys=1.32%, ctx=1482, majf=0, minf=4097 00:24:56.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:56.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:56.395 issued rwts: total=4501,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.395 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:56.395 job1: (groupid=0, jobs=1): err= 0: pid=1447208: Sat Jul 20 18:55:04 2024 00:24:56.395 read: IOPS=624, BW=156MiB/s (164MB/s)(1574MiB/10086msec) 00:24:56.396 slat (usec): min=9, max=155080, avg=1271.57, stdev=5322.08 00:24:56.396 clat (msec): min=9, max=481, avg=101.19, stdev=67.12 00:24:56.396 lat (msec): min=9, max=481, avg=102.46, stdev=67.66 00:24:56.396 clat percentiles (msec): 00:24:56.396 | 1.00th=[ 24], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 53], 00:24:56.396 | 30.00th=[ 58], 40.00th=[ 68], 50.00th=[ 77], 60.00th=[ 91], 00:24:56.396 | 70.00th=[ 118], 80.00th=[ 146], 90.00th=[ 186], 95.00th=[ 234], 00:24:56.396 | 99.00th=[ 326], 99.50th=[ 451], 99.90th=[ 472], 99.95th=[ 481], 00:24:56.396 | 99.99th=[ 481] 00:24:56.396 bw ( KiB/s): min=47616, max=312320, per=10.92%, avg=159509.85, stdev=80233.26, samples=20 00:24:56.396 iops : min= 186, max= 1220, avg=623.05, stdev=313.42, samples=20 00:24:56.396 lat (msec) : 10=0.03%, 20=0.70%, 50=14.27%, 100=48.58%, 250=32.68% 00:24:56.396 lat (msec) : 500=3.75% 00:24:56.396 cpu : usr=0.29%, sys=2.01%, ctx=1461, majf=0, minf=4097 00:24:56.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:56.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:56.396 issued rwts: total=6295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:56.396 job2: (groupid=0, jobs=1): err= 0: pid=1447209: Sat Jul 20 18:55:04 2024 00:24:56.396 read: IOPS=704, BW=176MiB/s (185MB/s)(1777MiB/10089msec) 00:24:56.396 slat (usec): min=9, max=152620, avg=1321.23, stdev=4837.20 00:24:56.396 clat (msec): min=12, max=314, avg=89.48, stdev=37.79 00:24:56.396 lat (msec): min=12, max=314, avg=90.80, stdev=38.28 00:24:56.396 clat percentiles (msec): 00:24:56.396 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 52], 20.00th=[ 60], 00:24:56.396 | 30.00th=[ 66], 40.00th=[ 74], 50.00th=[ 
83], 60.00th=[ 91], 00:24:56.396 | 70.00th=[ 100], 80.00th=[ 112], 90.00th=[ 136], 95.00th=[ 174], 00:24:56.396 | 99.00th=[ 213], 99.50th=[ 245], 99.90th=[ 262], 99.95th=[ 268], 00:24:56.396 | 99.99th=[ 317] 00:24:56.396 bw ( KiB/s): min=82432, max=282624, per=12.34%, avg=180266.15, stdev=50208.24, samples=20 00:24:56.396 iops : min= 322, max= 1104, avg=704.10, stdev=196.08, samples=20 00:24:56.396 lat (msec) : 20=0.14%, 50=8.54%, 100=61.79%, 250=29.10%, 500=0.42% 00:24:56.396 cpu : usr=0.56%, sys=2.40%, ctx=1611, majf=0, minf=4097 00:24:56.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:56.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:56.396 issued rwts: total=7106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:56.396 job3: (groupid=0, jobs=1): err= 0: pid=1447211: Sat Jul 20 18:55:04 2024 00:24:56.396 read: IOPS=397, BW=99.5MiB/s (104MB/s)(1006MiB/10114msec) 00:24:56.396 slat (usec): min=9, max=302732, avg=2002.62, stdev=10851.30 00:24:56.396 clat (msec): min=5, max=1047, avg=158.74, stdev=141.06 00:24:56.396 lat (msec): min=5, max=1047, avg=160.74, stdev=143.17 00:24:56.396 clat percentiles (msec): 00:24:56.396 | 1.00th=[ 10], 5.00th=[ 18], 10.00th=[ 56], 20.00th=[ 74], 00:24:56.396 | 30.00th=[ 86], 40.00th=[ 104], 50.00th=[ 128], 60.00th=[ 144], 00:24:56.396 | 70.00th=[ 176], 80.00th=[ 209], 90.00th=[ 296], 95.00th=[ 355], 00:24:56.396 | 99.00th=[ 894], 99.50th=[ 936], 99.90th=[ 961], 99.95th=[ 969], 00:24:56.396 | 99.99th=[ 1045] 00:24:56.396 bw ( KiB/s): min=16896, max=197632, per=6.94%, avg=101377.70, stdev=51759.36, samples=20 00:24:56.396 iops : min= 66, max= 772, avg=395.95, stdev=202.15, samples=20 00:24:56.396 lat (msec) : 10=2.01%, 20=3.73%, 50=3.01%, 100=28.38%, 250=48.71% 00:24:56.396 lat (msec) : 500=10.79%, 750=1.96%, 1000=1.37%, 2000=0.05% 00:24:56.396 cpu : usr=0.25%, sys=1.11%, ctx=1011, majf=0, minf=4097 00:24:56.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:24:56.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:56.396 issued rwts: total=4024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:56.396 job4: (groupid=0, jobs=1): err= 0: pid=1447212: Sat Jul 20 18:55:04 2024 00:24:56.396 read: IOPS=635, BW=159MiB/s (166MB/s)(1599MiB/10069msec) 00:24:56.396 slat (usec): min=9, max=85486, avg=863.42, stdev=4407.13 00:24:56.396 clat (msec): min=2, max=656, avg=99.86, stdev=73.79 00:24:56.396 lat (msec): min=2, max=656, avg=100.72, stdev=74.26 00:24:56.396 clat percentiles (msec): 00:24:56.396 | 1.00th=[ 8], 5.00th=[ 26], 10.00th=[ 34], 20.00th=[ 47], 00:24:56.396 | 30.00th=[ 58], 40.00th=[ 67], 50.00th=[ 83], 60.00th=[ 104], 00:24:56.396 | 70.00th=[ 124], 80.00th=[ 144], 90.00th=[ 171], 95.00th=[ 197], 00:24:56.396 | 99.00th=[ 435], 99.50th=[ 485], 99.90th=[ 634], 99.95th=[ 642], 00:24:56.396 | 99.99th=[ 659] 00:24:56.396 bw ( KiB/s): min=93696, max=314368, per=11.09%, avg=162035.75, stdev=58847.44, samples=20 00:24:56.396 iops : min= 366, max= 1228, avg=632.90, stdev=229.87, samples=20 00:24:56.396 lat (msec) : 4=0.36%, 10=1.25%, 20=1.95%, 50=18.58%, 100=36.27% 00:24:56.396 lat (msec) : 250=38.57%, 500=2.63%, 750=0.39% 00:24:56.396 cpu : 
usr=0.33%, sys=1.48%, ctx=2017, majf=0, minf=4097 00:24:56.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:56.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:56.396 issued rwts: total=6394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:56.396 job5: (groupid=0, jobs=1): err= 0: pid=1447213: Sat Jul 20 18:55:04 2024 00:24:56.396 read: IOPS=735, BW=184MiB/s (193MB/s)(1849MiB/10056msec) 00:24:56.396 slat (usec): min=9, max=91928, avg=1217.89, stdev=3482.81 00:24:56.396 clat (msec): min=21, max=364, avg=85.77, stdev=35.42 00:24:56.396 lat (msec): min=21, max=364, avg=86.99, stdev=35.70 00:24:56.396 clat percentiles (msec): 00:24:56.396 | 1.00th=[ 43], 5.00th=[ 50], 10.00th=[ 54], 20.00th=[ 60], 00:24:56.396 | 30.00th=[ 67], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 86], 00:24:56.396 | 70.00th=[ 94], 80.00th=[ 103], 90.00th=[ 118], 95.00th=[ 148], 00:24:56.396 | 99.00th=[ 239], 99.50th=[ 249], 99.90th=[ 338], 99.95th=[ 338], 00:24:56.396 | 99.99th=[ 363] 00:24:56.396 bw ( KiB/s): min=89600, max=298411, per=12.84%, avg=187633.20, stdev=49533.52, samples=20 00:24:56.396 iops : min= 350, max= 1165, avg=732.85, stdev=193.47, samples=20 00:24:56.396 lat (msec) : 50=5.80%, 100=71.63%, 250=22.13%, 500=0.45% 00:24:56.396 cpu : usr=0.50%, sys=2.49%, ctx=1631, majf=0, minf=4097 00:24:56.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:56.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:56.396 issued rwts: total=7394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:56.396 job6: (groupid=0, jobs=1): err= 0: pid=1447214: Sat Jul 20 18:55:04 2024 00:24:56.396 read: IOPS=335, BW=83.9MiB/s (88.0MB/s)(844MiB/10061msec) 00:24:56.396 slat (usec): min=10, max=312496, avg=1942.48, stdev=10485.96 00:24:56.396 clat (msec): min=15, max=1054, avg=188.66, stdev=145.33 00:24:56.396 lat (msec): min=16, max=1054, avg=190.60, stdev=146.37 00:24:56.396 clat percentiles (msec): 00:24:56.396 | 1.00th=[ 52], 5.00th=[ 75], 10.00th=[ 88], 20.00th=[ 102], 00:24:56.396 | 30.00th=[ 114], 40.00th=[ 130], 50.00th=[ 148], 60.00th=[ 169], 00:24:56.396 | 70.00th=[ 190], 80.00th=[ 224], 90.00th=[ 347], 95.00th=[ 468], 00:24:56.396 | 99.00th=[ 927], 99.50th=[ 986], 99.90th=[ 1003], 99.95th=[ 1053], 00:24:56.396 | 99.99th=[ 1053] 00:24:56.396 bw ( KiB/s): min= 5632, max=153088, per=5.80%, avg=84807.70, stdev=41408.48, samples=20 00:24:56.396 iops : min= 22, max= 598, avg=331.20, stdev=161.79, samples=20 00:24:56.396 lat (msec) : 20=0.03%, 50=0.59%, 100=17.51%, 250=65.58%, 500=11.61% 00:24:56.396 lat (msec) : 750=2.96%, 1000=1.63%, 2000=0.09% 00:24:56.396 cpu : usr=0.19%, sys=1.00%, ctx=965, majf=0, minf=4097 00:24:56.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:24:56.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:56.396 issued rwts: total=3376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:56.396 job7: (groupid=0, jobs=1): err= 0: pid=1447215: Sat Jul 20 18:55:04 2024 00:24:56.396 read: 
IOPS=438, BW=110MiB/s (115MB/s)(1108MiB/10094msec) 00:24:56.396 slat (usec): min=9, max=506654, avg=1449.90, stdev=11133.16 00:24:56.396 clat (msec): min=2, max=610, avg=144.28, stdev=102.15 00:24:56.396 lat (msec): min=2, max=851, avg=145.73, stdev=103.38 00:24:56.396 clat percentiles (msec): 00:24:56.396 | 1.00th=[ 10], 5.00th=[ 34], 10.00th=[ 68], 20.00th=[ 84], 00:24:56.396 | 30.00th=[ 93], 40.00th=[ 104], 50.00th=[ 113], 60.00th=[ 131], 00:24:56.396 | 70.00th=[ 153], 80.00th=[ 171], 90.00th=[ 305], 95.00th=[ 368], 00:24:56.396 | 99.00th=[ 600], 99.50th=[ 600], 99.90th=[ 609], 99.95th=[ 609], 00:24:56.396 | 99.99th=[ 609] 00:24:56.396 bw ( KiB/s): min=36864, max=174592, per=7.65%, avg=111752.45, stdev=44984.57, samples=20 00:24:56.396 iops : min= 144, max= 682, avg=436.50, stdev=175.77, samples=20 00:24:56.396 lat (msec) : 4=0.05%, 10=1.11%, 20=1.81%, 50=3.97%, 100=29.89% 00:24:56.396 lat (msec) : 250=51.65%, 500=9.95%, 750=1.58% 00:24:56.396 cpu : usr=0.14%, sys=1.39%, ctx=1353, majf=0, minf=4097 00:24:56.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:56.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:56.396 issued rwts: total=4430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:56.396 job8: (groupid=0, jobs=1): err= 0: pid=1447216: Sat Jul 20 18:55:04 2024 00:24:56.396 read: IOPS=610, BW=153MiB/s (160MB/s)(1539MiB/10089msec) 00:24:56.396 slat (usec): min=9, max=124400, avg=782.37, stdev=5191.49 00:24:56.396 clat (usec): min=1465, max=603938, avg=104019.90, stdev=88640.44 00:24:56.396 lat (usec): min=1517, max=603969, avg=104802.27, stdev=89460.03 00:24:56.396 clat percentiles (msec): 00:24:56.396 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 24], 20.00th=[ 42], 00:24:56.396 | 30.00th=[ 52], 40.00th=[ 63], 50.00th=[ 77], 60.00th=[ 100], 00:24:56.396 | 70.00th=[ 125], 80.00th=[ 153], 90.00th=[ 199], 95.00th=[ 284], 00:24:56.396 | 99.00th=[ 460], 99.50th=[ 506], 99.90th=[ 592], 99.95th=[ 592], 00:24:56.396 | 99.99th=[ 600] 00:24:56.396 bw ( KiB/s): min=51200, max=311696, per=10.67%, avg=155937.80, stdev=69475.02, samples=20 00:24:56.396 iops : min= 200, max= 1217, avg=609.05, stdev=271.34, samples=20 00:24:56.396 lat (msec) : 2=0.03%, 4=0.80%, 10=2.03%, 20=5.36%, 50=20.68% 00:24:56.396 lat (msec) : 100=31.56%, 250=33.10%, 500=5.91%, 750=0.54% 00:24:56.396 cpu : usr=0.29%, sys=1.71%, ctx=1973, majf=0, minf=4097 00:24:56.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:56.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:56.396 issued rwts: total=6157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:56.396 job9: (groupid=0, jobs=1): err= 0: pid=1447217: Sat Jul 20 18:55:04 2024 00:24:56.396 read: IOPS=418, BW=105MiB/s (110MB/s)(1056MiB/10095msec) 00:24:56.396 slat (usec): min=9, max=316138, avg=1939.38, stdev=10150.62 00:24:56.396 clat (msec): min=5, max=1003, avg=150.92, stdev=134.30 00:24:56.396 lat (msec): min=5, max=1003, avg=152.86, stdev=136.44 00:24:56.396 clat percentiles (msec): 00:24:56.396 | 1.00th=[ 12], 5.00th=[ 25], 10.00th=[ 46], 20.00th=[ 79], 00:24:56.396 | 30.00th=[ 101], 40.00th=[ 115], 50.00th=[ 128], 60.00th=[ 144], 00:24:56.396 | 70.00th=[ 
159], 80.00th=[ 174], 90.00th=[ 234], 95.00th=[ 380], 00:24:56.396 | 99.00th=[ 902], 99.50th=[ 961], 99.90th=[ 969], 99.95th=[ 978], 00:24:56.396 | 99.99th=[ 1003] 00:24:56.396 bw ( KiB/s): min= 8704, max=210432, per=7.29%, avg=106489.70, stdev=57628.24, samples=20 00:24:56.396 iops : min= 34, max= 822, avg=415.90, stdev=225.16, samples=20 00:24:56.396 lat (msec) : 10=0.69%, 20=2.79%, 50=8.14%, 100=18.28%, 250=61.55% 00:24:56.396 lat (msec) : 500=5.09%, 750=2.06%, 1000=1.37%, 2000=0.02% 00:24:56.396 cpu : usr=0.24%, sys=1.32%, ctx=1125, majf=0, minf=4097 00:24:56.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:56.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:56.396 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:56.396 job10: (groupid=0, jobs=1): err= 0: pid=1447225: Sat Jul 20 18:55:04 2024 00:24:56.396 read: IOPS=379, BW=94.9MiB/s (99.5MB/s)(957MiB/10082msec) 00:24:56.396 slat (usec): min=9, max=594192, avg=1613.15, stdev=12178.10 00:24:56.396 clat (msec): min=4, max=1178, avg=166.83, stdev=143.40 00:24:56.396 lat (msec): min=4, max=1178, avg=168.45, stdev=145.52 00:24:56.396 clat percentiles (msec): 00:24:56.396 | 1.00th=[ 18], 5.00th=[ 37], 10.00th=[ 60], 20.00th=[ 88], 00:24:56.396 | 30.00th=[ 106], 40.00th=[ 123], 50.00th=[ 138], 60.00th=[ 150], 00:24:56.396 | 70.00th=[ 169], 80.00th=[ 188], 90.00th=[ 292], 95.00th=[ 401], 00:24:56.396 | 99.00th=[ 894], 99.50th=[ 936], 99.90th=[ 961], 99.95th=[ 961], 00:24:56.396 | 99.99th=[ 1183] 00:24:56.396 bw ( KiB/s): min=13312, max=176640, per=6.59%, avg=96347.45, stdev=51539.62, samples=20 00:24:56.396 iops : min= 52, max= 690, avg=376.30, stdev=201.39, samples=20 00:24:56.396 lat (msec) : 10=0.31%, 20=1.02%, 50=6.48%, 100=19.41%, 250=61.55% 00:24:56.396 lat (msec) : 500=7.05%, 750=2.53%, 1000=1.62%, 2000=0.03% 00:24:56.396 cpu : usr=0.23%, sys=1.21%, ctx=1196, majf=0, minf=3724 00:24:56.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:24:56.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:56.396 issued rwts: total=3828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:56.396 00:24:56.396 Run status group 0 (all jobs): 00:24:56.396 READ: bw=1427MiB/s (1496MB/s), 83.9MiB/s-184MiB/s (88.0MB/s-193MB/s), io=14.1GiB (15.1GB), run=10056-10114msec 00:24:56.396 00:24:56.396 Disk stats (read/write): 00:24:56.396 nvme0n1: ios=8841/0, merge=0/0, ticks=1245129/0, in_queue=1245129, util=97.25% 00:24:56.396 nvme10n1: ios=12425/0, merge=0/0, ticks=1227468/0, in_queue=1227468, util=97.46% 00:24:56.396 nvme1n1: ios=14047/0, merge=0/0, ticks=1228424/0, in_queue=1228424, util=97.73% 00:24:56.396 nvme2n1: ios=7829/0, merge=0/0, ticks=1229842/0, in_queue=1229842, util=97.87% 00:24:56.396 nvme3n1: ios=12599/0, merge=0/0, ticks=1243992/0, in_queue=1243992, util=97.95% 00:24:56.396 nvme4n1: ios=14583/0, merge=0/0, ticks=1229454/0, in_queue=1229454, util=98.26% 00:24:56.396 nvme5n1: ios=6511/0, merge=0/0, ticks=1236839/0, in_queue=1236839, util=98.41% 00:24:56.396 nvme6n1: ios=8689/0, merge=0/0, ticks=1240420/0, in_queue=1240420, util=98.52% 00:24:56.396 nvme7n1: ios=12143/0, merge=0/0, ticks=1241590/0, 
in_queue=1241590, util=98.91% 00:24:56.396 nvme8n1: ios=8246/0, merge=0/0, ticks=1231220/0, in_queue=1231220, util=99.08% 00:24:56.396 nvme9n1: ios=7374/0, merge=0/0, ticks=1243232/0, in_queue=1243232, util=99.22% 00:24:56.396 18:55:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:56.396 [global] 00:24:56.396 thread=1 00:24:56.396 invalidate=1 00:24:56.396 rw=randwrite 00:24:56.396 time_based=1 00:24:56.396 runtime=10 00:24:56.396 ioengine=libaio 00:24:56.396 direct=1 00:24:56.396 bs=262144 00:24:56.396 iodepth=64 00:24:56.396 norandommap=1 00:24:56.396 numjobs=1 00:24:56.396 00:24:56.396 [job0] 00:24:56.397 filename=/dev/nvme0n1 00:24:56.397 [job1] 00:24:56.397 filename=/dev/nvme10n1 00:24:56.397 [job2] 00:24:56.397 filename=/dev/nvme1n1 00:24:56.397 [job3] 00:24:56.397 filename=/dev/nvme2n1 00:24:56.397 [job4] 00:24:56.397 filename=/dev/nvme3n1 00:24:56.397 [job5] 00:24:56.397 filename=/dev/nvme4n1 00:24:56.397 [job6] 00:24:56.397 filename=/dev/nvme5n1 00:24:56.397 [job7] 00:24:56.397 filename=/dev/nvme6n1 00:24:56.397 [job8] 00:24:56.397 filename=/dev/nvme7n1 00:24:56.397 [job9] 00:24:56.397 filename=/dev/nvme8n1 00:24:56.397 [job10] 00:24:56.397 filename=/dev/nvme9n1 00:24:56.397 Could not set queue depth (nvme0n1) 00:24:56.397 Could not set queue depth (nvme10n1) 00:24:56.397 Could not set queue depth (nvme1n1) 00:24:56.397 Could not set queue depth (nvme2n1) 00:24:56.397 Could not set queue depth (nvme3n1) 00:24:56.397 Could not set queue depth (nvme4n1) 00:24:56.397 Could not set queue depth (nvme5n1) 00:24:56.397 Could not set queue depth (nvme6n1) 00:24:56.397 Could not set queue depth (nvme7n1) 00:24:56.397 Could not set queue depth (nvme8n1) 00:24:56.397 Could not set queue depth (nvme9n1) 00:24:56.397 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:56.397 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:56.397 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:56.397 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:56.397 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:56.397 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:56.397 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:56.397 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:56.397 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:56.397 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:56.397 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:56.397 fio-3.35 00:24:56.397 Starting 11 threads 00:25:06.454 00:25:06.454 job0: (groupid=0, jobs=1): err= 0: pid=1448238: Sat Jul 20 18:55:15 2024 00:25:06.454 write: IOPS=451, BW=113MiB/s (118MB/s)(1151MiB/10203msec); 0 zone resets 00:25:06.454 slat (usec): min=24, 
max=98774, avg=1494.27, stdev=3962.57 00:25:06.454 clat (msec): min=16, max=1719, avg=140.20, stdev=133.57 00:25:06.454 lat (msec): min=16, max=1719, avg=141.70, stdev=133.77 00:25:06.454 clat percentiles (msec): 00:25:06.454 | 1.00th=[ 44], 5.00th=[ 70], 10.00th=[ 85], 20.00th=[ 90], 00:25:06.454 | 30.00th=[ 107], 40.00th=[ 111], 50.00th=[ 115], 60.00th=[ 120], 00:25:06.454 | 70.00th=[ 126], 80.00th=[ 155], 90.00th=[ 207], 95.00th=[ 284], 00:25:06.454 | 99.00th=[ 584], 99.50th=[ 1234], 99.90th=[ 1720], 99.95th=[ 1720], 00:25:06.454 | 99.99th=[ 1720] 00:25:06.454 bw ( KiB/s): min=55296, max=171688, per=16.51%, avg=116195.50, stdev=33326.57, samples=20 00:25:06.454 iops : min= 216, max= 670, avg=453.75, stdev=130.17, samples=20 00:25:06.454 lat (msec) : 20=0.07%, 50=1.48%, 100=23.52%, 250=67.94%, 500=5.84% 00:25:06.454 lat (msec) : 750=0.22%, 1000=0.09%, 2000=0.85% 00:25:06.454 cpu : usr=1.33%, sys=1.58%, ctx=2362, majf=0, minf=1 00:25:06.454 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:06.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.454 issued rwts: total=0,4604,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.454 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.454 job1: (groupid=0, jobs=1): err= 0: pid=1448239: Sat Jul 20 18:55:15 2024 00:25:06.454 write: IOPS=263, BW=65.9MiB/s (69.1MB/s)(666MiB/10106msec); 0 zone resets 00:25:06.454 slat (usec): min=22, max=529987, avg=1796.82, stdev=12899.79 00:25:06.454 clat (msec): min=22, max=4477, avg=240.96, stdev=590.81 00:25:06.454 lat (msec): min=22, max=4477, avg=242.76, stdev=590.81 00:25:06.454 clat percentiles (msec): 00:25:06.454 | 1.00th=[ 33], 5.00th=[ 42], 10.00th=[ 52], 20.00th=[ 85], 00:25:06.454 | 30.00th=[ 107], 40.00th=[ 111], 50.00th=[ 115], 60.00th=[ 120], 00:25:06.454 | 70.00th=[ 138], 80.00th=[ 292], 90.00th=[ 372], 95.00th=[ 414], 00:25:06.454 | 99.00th=[ 4396], 99.50th=[ 4463], 99.90th=[ 4463], 99.95th=[ 4463], 00:25:06.454 | 99.99th=[ 4463] 00:25:06.454 bw ( KiB/s): min= 512, max=147968, per=9.45%, avg=66535.30, stdev=42136.18, samples=20 00:25:06.454 iops : min= 2, max= 578, avg=259.75, stdev=164.69, samples=20 00:25:06.454 lat (msec) : 50=9.43%, 100=15.32%, 250=54.07%, 500=17.27%, 750=1.50% 00:25:06.454 lat (msec) : 1000=0.19%, 2000=0.30%, >=2000=1.92% 00:25:06.454 cpu : usr=0.86%, sys=0.79%, ctx=1514, majf=0, minf=1 00:25:06.454 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:25:06.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.454 issued rwts: total=0,2663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.454 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.454 job2: (groupid=0, jobs=1): err= 0: pid=1448252: Sat Jul 20 18:55:15 2024 00:25:06.454 write: IOPS=426, BW=107MiB/s (112MB/s)(1077MiB/10105msec); 0 zone resets 00:25:06.454 slat (usec): min=17, max=26638, avg=1076.38, stdev=2499.69 00:25:06.454 clat (msec): min=4, max=1876, avg=149.00, stdev=236.40 00:25:06.454 lat (msec): min=4, max=1876, avg=150.08, stdev=236.24 00:25:06.454 clat percentiles (msec): 00:25:06.454 | 1.00th=[ 23], 5.00th=[ 53], 10.00th=[ 73], 20.00th=[ 90], 00:25:06.454 | 30.00th=[ 103], 40.00th=[ 108], 50.00th=[ 110], 60.00th=[ 113], 00:25:06.454 | 70.00th=[ 116], 80.00th=[ 125], 90.00th=[ 142], 95.00th=[ 236], 
00:25:06.454 | 99.00th=[ 1838], 99.50th=[ 1854], 99.90th=[ 1871], 99.95th=[ 1871], 00:25:06.454 | 99.99th=[ 1871] 00:25:06.454 bw ( KiB/s): min=32702, max=165045, per=15.43%, avg=108596.15, stdev=43410.57, samples=20 00:25:06.454 iops : min= 127, max= 644, avg=424.00, stdev=169.51, samples=20 00:25:06.454 lat (msec) : 10=0.44%, 20=0.09%, 50=4.18%, 100=22.13%, 250=68.52% 00:25:06.454 lat (msec) : 500=1.42%, 750=0.35%, 1000=1.28%, 2000=1.60% 00:25:06.454 cpu : usr=1.11%, sys=1.27%, ctx=2340, majf=0, minf=1 00:25:06.454 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:25:06.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.454 issued rwts: total=0,4307,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.454 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.454 job3: (groupid=0, jobs=1): err= 0: pid=1448253: Sat Jul 20 18:55:15 2024 00:25:06.454 write: IOPS=269, BW=67.4MiB/s (70.7MB/s)(687MiB/10190msec); 0 zone resets 00:25:06.454 slat (usec): min=25, max=1670.5k, avg=3288.30, stdev=32294.16 00:25:06.454 clat (msec): min=19, max=1959, avg=233.80, stdev=261.39 00:25:06.454 lat (msec): min=19, max=1960, avg=237.08, stdev=263.21 00:25:06.454 clat percentiles (msec): 00:25:06.454 | 1.00th=[ 45], 5.00th=[ 61], 10.00th=[ 108], 20.00th=[ 142], 00:25:06.454 | 30.00th=[ 169], 40.00th=[ 180], 50.00th=[ 192], 60.00th=[ 205], 00:25:06.454 | 70.00th=[ 243], 80.00th=[ 264], 90.00th=[ 284], 95.00th=[ 300], 00:25:06.454 | 99.00th=[ 1888], 99.50th=[ 1905], 99.90th=[ 1955], 99.95th=[ 1955], 00:25:06.454 | 99.99th=[ 1955] 00:25:06.454 bw ( KiB/s): min=21504, max=131584, per=10.85%, avg=76382.11, stdev=27869.24, samples=18 00:25:06.454 iops : min= 84, max= 514, avg=298.22, stdev=108.80, samples=18 00:25:06.454 lat (msec) : 20=0.15%, 50=3.64%, 100=3.42%, 250=65.81%, 500=24.70% 00:25:06.454 lat (msec) : 2000=2.29% 00:25:06.454 cpu : usr=0.80%, sys=0.84%, ctx=1048, majf=0, minf=1 00:25:06.454 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:25:06.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.454 issued rwts: total=0,2749,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.454 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.454 job4: (groupid=0, jobs=1): err= 0: pid=1448254: Sat Jul 20 18:55:15 2024 00:25:06.454 write: IOPS=286, BW=71.5MiB/s (75.0MB/s)(723MiB/10115msec); 0 zone resets 00:25:06.454 slat (usec): min=23, max=3015.4k, avg=3092.44, stdev=57325.55 00:25:06.454 clat (msec): min=24, max=3549, avg=220.58, stdev=482.91 00:25:06.454 lat (msec): min=25, max=3549, avg=223.67, stdev=486.35 00:25:06.454 clat percentiles (msec): 00:25:06.454 | 1.00th=[ 61], 5.00th=[ 103], 10.00th=[ 110], 20.00th=[ 115], 00:25:06.454 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 124], 60.00th=[ 127], 00:25:06.454 | 70.00th=[ 130], 80.00th=[ 142], 90.00th=[ 224], 95.00th=[ 435], 00:25:06.454 | 99.00th=[ 3373], 99.50th=[ 3373], 99.90th=[ 3540], 99.95th=[ 3540], 00:25:06.454 | 99.99th=[ 3540] 00:25:06.454 bw ( KiB/s): min= 8192, max=139776, per=13.72%, avg=96559.80, stdev=49667.13, samples=15 00:25:06.454 iops : min= 32, max= 546, avg=377.13, stdev=194.04, samples=15 00:25:06.454 lat (msec) : 50=0.45%, 100=3.87%, 250=86.38%, 500=4.74%, 750=2.32% 00:25:06.454 lat (msec) : 1000=0.07%, >=2000=2.18% 00:25:06.454 cpu : 
usr=0.75%, sys=0.85%, ctx=923, majf=0, minf=1 00:25:06.454 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:25:06.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.454 issued rwts: total=0,2893,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.454 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.454 job5: (groupid=0, jobs=1): err= 0: pid=1448255: Sat Jul 20 18:55:15 2024 00:25:06.454 write: IOPS=59, BW=15.0MiB/s (15.7MB/s)(154MiB/10269msec); 0 zone resets 00:25:06.454 slat (usec): min=18, max=1451.7k, avg=9859.15, stdev=79225.64 00:25:06.454 clat (msec): min=9, max=4688, avg=1058.21, stdev=1156.88 00:25:06.454 lat (msec): min=10, max=4688, avg=1068.07, stdev=1162.72 00:25:06.454 clat percentiles (msec): 00:25:06.454 | 1.00th=[ 16], 5.00th=[ 53], 10.00th=[ 83], 20.00th=[ 205], 00:25:06.454 | 30.00th=[ 239], 40.00th=[ 284], 50.00th=[ 321], 60.00th=[ 751], 00:25:06.454 | 70.00th=[ 1603], 80.00th=[ 2072], 90.00th=[ 2836], 95.00th=[ 3473], 00:25:06.454 | 99.00th=[ 4178], 99.50th=[ 4665], 99.90th=[ 4665], 99.95th=[ 4665], 00:25:06.454 | 99.99th=[ 4665] 00:25:06.454 bw ( KiB/s): min= 1536, max=62976, per=3.08%, avg=21687.00, stdev=22318.09, samples=13 00:25:06.454 iops : min= 6, max= 246, avg=84.46, stdev=87.19, samples=13 00:25:06.454 lat (msec) : 10=0.16%, 20=1.63%, 50=2.60%, 100=7.15%, 250=19.02% 00:25:06.454 lat (msec) : 500=22.28%, 750=6.67%, 1000=4.07%, 2000=12.20%, >=2000=24.23% 00:25:06.454 cpu : usr=0.10%, sys=0.25%, ctx=342, majf=0, minf=1 00:25:06.454 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.8% 00:25:06.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.454 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:25:06.454 issued rwts: total=0,615,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.454 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.454 job6: (groupid=0, jobs=1): err= 0: pid=1448256: Sat Jul 20 18:55:15 2024 00:25:06.454 write: IOPS=134, BW=33.5MiB/s (35.1MB/s)(345MiB/10290msec); 0 zone resets 00:25:06.454 slat (usec): min=22, max=3324.1k, avg=6411.60, stdev=93508.91 00:25:06.454 clat (msec): min=20, max=4830, avg=470.94, stdev=930.89 00:25:06.454 lat (msec): min=20, max=4830, avg=477.35, stdev=937.81 00:25:06.454 clat percentiles (msec): 00:25:06.454 | 1.00th=[ 36], 5.00th=[ 52], 10.00th=[ 80], 20.00th=[ 116], 00:25:06.454 | 30.00th=[ 132], 40.00th=[ 165], 50.00th=[ 192], 60.00th=[ 271], 00:25:06.454 | 70.00th=[ 292], 80.00th=[ 393], 90.00th=[ 827], 95.00th=[ 1351], 00:25:06.454 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 00:25:06.454 | 99.99th=[ 4799] 00:25:06.454 bw ( KiB/s): min= 2048, max=113437, per=6.83%, avg=48061.00, stdev=40106.47, samples=14 00:25:06.454 iops : min= 8, max= 443, avg=187.57, stdev=156.73, samples=14 00:25:06.454 lat (msec) : 50=3.84%, 100=10.22%, 250=40.83%, 500=27.56%, 750=6.74% 00:25:06.455 lat (msec) : 1000=2.54%, 2000=3.70%, >=2000=4.57% 00:25:06.455 cpu : usr=0.49%, sys=0.39%, ctx=841, majf=0, minf=1 00:25:06.455 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4% 00:25:06.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.455 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.455 issued rwts: total=0,1379,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.455 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:25:06.455 job7: (groupid=0, jobs=1): err= 0: pid=1448257: Sat Jul 20 18:55:15 2024 00:25:06.455 write: IOPS=227, BW=56.9MiB/s (59.6MB/s)(585MiB/10291msec); 0 zone resets 00:25:06.455 slat (usec): min=24, max=3253.2k, avg=4152.77, stdev=69206.45 00:25:06.455 clat (msec): min=33, max=4829, avg=277.03, stdev=718.74 00:25:06.455 lat (msec): min=33, max=4829, avg=281.18, stdev=723.79 00:25:06.455 clat percentiles (msec): 00:25:06.455 | 1.00th=[ 72], 5.00th=[ 109], 10.00th=[ 117], 20.00th=[ 123], 00:25:06.455 | 30.00th=[ 127], 40.00th=[ 132], 50.00th=[ 142], 60.00th=[ 150], 00:25:06.455 | 70.00th=[ 157], 80.00th=[ 167], 90.00th=[ 205], 95.00th=[ 435], 00:25:06.455 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 00:25:06.455 | 99.99th=[ 4799] 00:25:06.455 bw ( KiB/s): min= 2048, max=130048, per=11.83%, avg=83245.50, stdev=47647.65, samples=14 00:25:06.455 iops : min= 8, max= 508, avg=325.00, stdev=186.35, samples=14 00:25:06.455 lat (msec) : 50=0.43%, 100=3.12%, 250=89.24%, 500=2.52%, 750=1.50% 00:25:06.455 lat (msec) : 1000=0.17%, 2000=0.34%, >=2000=2.69% 00:25:06.455 cpu : usr=0.61%, sys=0.75%, ctx=734, majf=0, minf=1 00:25:06.455 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:25:06.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.455 issued rwts: total=0,2341,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.455 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.455 job8: (groupid=0, jobs=1): err= 0: pid=1448259: Sat Jul 20 18:55:15 2024 00:25:06.455 write: IOPS=266, BW=66.7MiB/s (69.9MB/s)(687MiB/10295msec); 0 zone resets 00:25:06.455 slat (usec): min=23, max=2454.0k, avg=2918.23, stdev=47047.18 00:25:06.455 clat (msec): min=20, max=3565, avg=236.79, stdev=472.98 00:25:06.455 lat (msec): min=20, max=3565, avg=239.70, stdev=475.01 00:25:06.455 clat percentiles (msec): 00:25:06.455 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 95], 20.00th=[ 107], 00:25:06.455 | 30.00th=[ 114], 40.00th=[ 128], 50.00th=[ 153], 60.00th=[ 182], 00:25:06.455 | 70.00th=[ 201], 80.00th=[ 215], 90.00th=[ 313], 95.00th=[ 368], 00:25:06.455 | 99.00th=[ 3473], 99.50th=[ 3540], 99.90th=[ 3574], 99.95th=[ 3574], 00:25:06.455 | 99.99th=[ 3574] 00:25:06.455 bw ( KiB/s): min=26112, max=143360, per=12.19%, avg=85812.25, stdev=31499.23, samples=16 00:25:06.455 iops : min= 102, max= 560, avg=335.06, stdev=122.97, samples=16 00:25:06.455 lat (msec) : 50=7.06%, 100=5.68%, 250=75.17%, 500=8.85%, 750=0.95% 00:25:06.455 lat (msec) : >=2000=2.29% 00:25:06.455 cpu : usr=0.74%, sys=0.86%, ctx=1275, majf=0, minf=1 00:25:06.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:25:06.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.455 issued rwts: total=0,2747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.455 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.455 job9: (groupid=0, jobs=1): err= 0: pid=1448263: Sat Jul 20 18:55:15 2024 00:25:06.455 write: IOPS=268, BW=67.1MiB/s (70.3MB/s)(680MiB/10129msec); 0 zone resets 00:25:06.455 slat (usec): min=20, max=1470.2k, avg=3187.27, stdev=37719.69 00:25:06.455 clat (msec): min=21, max=2952, avg=235.20, stdev=399.33 00:25:06.455 lat (msec): min=21, max=2952, avg=238.39, stdev=402.99 00:25:06.455 clat percentiles 
(msec): 00:25:06.455 | 1.00th=[ 44], 5.00th=[ 63], 10.00th=[ 89], 20.00th=[ 112], 00:25:06.455 | 30.00th=[ 138], 40.00th=[ 142], 50.00th=[ 146], 60.00th=[ 150], 00:25:06.455 | 70.00th=[ 163], 80.00th=[ 178], 90.00th=[ 257], 95.00th=[ 600], 00:25:06.455 | 99.00th=[ 2333], 99.50th=[ 2769], 99.90th=[ 2937], 99.95th=[ 2937], 00:25:06.455 | 99.99th=[ 2937] 00:25:06.455 bw ( KiB/s): min= 512, max=145117, per=10.72%, avg=75472.33, stdev=45190.86, samples=18 00:25:06.455 iops : min= 2, max= 566, avg=294.67, stdev=176.46, samples=18 00:25:06.455 lat (msec) : 50=2.54%, 100=13.39%, 250=72.88%, 500=5.08%, 750=2.47% 00:25:06.455 lat (msec) : 2000=1.32%, >=2000=2.32% 00:25:06.455 cpu : usr=0.68%, sys=0.83%, ctx=1106, majf=0, minf=1 00:25:06.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:25:06.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.455 issued rwts: total=0,2718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.455 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.455 job10: (groupid=0, jobs=1): err= 0: pid=1448264: Sat Jul 20 18:55:15 2024 00:25:06.455 write: IOPS=124, BW=31.2MiB/s (32.7MB/s)(321MiB/10290msec); 0 zone resets 00:25:06.455 slat (usec): min=24, max=3254.5k, avg=7751.18, stdev=97678.50 00:25:06.455 clat (msec): min=30, max=4840, avg=504.85, stdev=937.69 00:25:06.455 lat (msec): min=30, max=4840, avg=512.60, stdev=944.82 00:25:06.455 clat percentiles (msec): 00:25:06.455 | 1.00th=[ 58], 5.00th=[ 142], 10.00th=[ 169], 20.00th=[ 178], 00:25:06.455 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 203], 60.00th=[ 224], 00:25:06.455 | 70.00th=[ 275], 80.00th=[ 518], 90.00th=[ 1011], 95.00th=[ 1435], 00:25:06.455 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4866], 99.95th=[ 4866], 00:25:06.455 | 99.99th=[ 4866] 00:25:06.455 bw ( KiB/s): min= 4087, max=92160, per=6.34%, avg=44618.43, stdev=32517.94, samples=14 00:25:06.455 iops : min= 15, max= 360, avg=174.00, stdev=127.18, samples=14 00:25:06.455 lat (msec) : 50=0.62%, 100=2.18%, 250=60.05%, 500=16.90%, 750=8.88% 00:25:06.455 lat (msec) : 1000=1.01%, 2000=5.45%, >=2000=4.91% 00:25:06.455 cpu : usr=0.33%, sys=0.33%, ctx=412, majf=0, minf=1 00:25:06.455 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:25:06.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.455 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.455 issued rwts: total=0,1284,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.455 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.455 00:25:06.455 Run status group 0 (all jobs): 00:25:06.455 WRITE: bw=687MiB/s (721MB/s), 15.0MiB/s-113MiB/s (15.7MB/s-118MB/s), io=7075MiB (7419MB), run=10105-10295msec 00:25:06.455 00:25:06.455 Disk stats (read/write): 00:25:06.455 nvme0n1: ios=52/9182, merge=0/0, ticks=1480/1203878, in_queue=1205358, util=99.77% 00:25:06.455 nvme10n1: ios=49/5066, merge=0/0, ticks=83/1228069, in_queue=1228152, util=97.24% 00:25:06.455 nvme1n1: ios=39/8337, merge=0/0, ticks=134/1226422, in_queue=1226556, util=97.67% 00:25:06.455 nvme2n1: ios=27/5460, merge=0/0, ticks=75/1238222, in_queue=1238297, util=97.62% 00:25:06.455 nvme3n1: ios=48/5521, merge=0/0, ticks=7141/1140499, in_queue=1147640, util=100.00% 00:25:06.455 nvme4n1: ios=5/1102, merge=0/0, ticks=8/831538, in_queue=831546, util=98.02% 00:25:06.455 nvme5n1: ios=0/2630, merge=0/0, ticks=0/708834, 
in_queue=708834, util=98.13% 00:25:06.455 nvme6n1: ios=0/4559, merge=0/0, ticks=0/711092, in_queue=711092, util=98.32% 00:25:06.455 nvme7n1: ios=44/5367, merge=0/0, ticks=141/869351, in_queue=869492, util=99.61% 00:25:06.455 nvme8n1: ios=0/5428, merge=0/0, ticks=0/1247850, in_queue=1247850, util=98.97% 00:25:06.455 nvme9n1: ios=0/2447, merge=0/0, ticks=0/717103, in_queue=717103, util=99.10% 00:25:06.455 18:55:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:06.455 18:55:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:06.455 18:55:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.455 18:55:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:06.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:06.455 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 
-- # set +x 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:06.455 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:06.455 18:55:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:06.456 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.456 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.456 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.456 18:55:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.456 18:55:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:06.714 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:06.714 18:55:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:06.714 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:06.714 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:06.714 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:25:06.714 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:06.714 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:25:06.714 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:06.714 18:55:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:06.714 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.714 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.714 18:55:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.714 18:55:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.714 18:55:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:06.972 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:06.972 18:55:17 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:06.972 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:06.972 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:06.972 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:25:06.972 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:06.972 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:25:06.972 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:06.972 18:55:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:06.972 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.972 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.972 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.972 18:55:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.972 18:55:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:07.231 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:07.231 18:55:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:07.231 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:07.231 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:07.231 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:25:07.231 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:07.231 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:25:07.231 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:07.231 18:55:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:07.231 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.231 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.231 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.231 18:55:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.231 18:55:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:07.489 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:07.489 18:55:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:07.489 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:07.489 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:07.489 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:25:07.489 18:55:17 nvmf_tcp.nvmf_multiconnection 
-- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:07.489 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:25:07.489 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:07.489 18:55:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:07.489 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.489 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.489 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.489 18:55:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.489 18:55:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:07.489 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:07.489 18:55:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:07.489 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:07.489 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:07.489 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:25:07.489 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:07.489 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:25:07.747 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:07.747 18:55:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:07.747 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.747 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.747 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.747 18:55:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.747 18:55:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:07.747 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:07.747 18:55:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:07.747 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:07.747 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:07.747 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:25:07.747 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:07.747 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:25:07.747 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:07.747 18:55:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:07.747 18:55:17 nvmf_tcp.nvmf_multiconnection 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.747 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.747 18:55:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.747 18:55:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.747 18:55:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:07.747 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:08.005 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 
-- # trap - SIGINT SIGTERM EXIT 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:08.005 rmmod nvme_tcp 00:25:08.005 rmmod nvme_fabrics 00:25:08.005 rmmod nvme_keyring 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1443095 ']' 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1443095 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 1443095 ']' 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 1443095 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:08.005 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1443095 00:25:08.263 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:08.263 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:08.263 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1443095' 00:25:08.263 killing process with pid 1443095 00:25:08.263 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 1443095 00:25:08.263 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 1443095 00:25:08.828 18:55:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:08.828 18:55:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:08.828 18:55:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:08.828 18:55:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:08.828 18:55:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:08.828 18:55:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.828 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:08.828 18:55:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.727 18:55:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:10.727 00:25:10.727 real 0m59.924s 00:25:10.727 user 3m10.300s 00:25:10.727 sys 0m17.463s 00:25:10.727 18:55:20 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:25:10.727 18:55:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:10.727 ************************************ 00:25:10.727 END TEST nvmf_multiconnection 00:25:10.727 ************************************ 00:25:10.727 18:55:20 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:10.727 18:55:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:10.727 18:55:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:10.727 18:55:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:10.727 ************************************ 00:25:10.727 START TEST nvmf_initiator_timeout 00:25:10.727 ************************************ 00:25:10.727 18:55:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:10.727 * Looking for test storage... 00:25:10.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.727 18:55:21 
nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.727 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 
1 ']' 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:10.728 18:55:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.262 
18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:13.262 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:13.262 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:13.262 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:13.262 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:13.262 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:13.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:25:13.263 00:25:13.263 --- 10.0.0.2 ping statistics --- 00:25:13.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.263 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:13.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:25:13.263 00:25:13.263 --- 10.0.0.1 ping statistics --- 00:25:13.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.263 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1450985 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1450985 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 1450985 ']' 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:13.263 [2024-07-20 18:55:23.243558] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:25:13.263 [2024-07-20 18:55:23.243645] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.263 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.263 [2024-07-20 18:55:23.307611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:13.263 [2024-07-20 18:55:23.393579] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:13.263 [2024-07-20 18:55:23.393631] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:13.263 [2024-07-20 18:55:23.393656] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:13.263 [2024-07-20 18:55:23.393667] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:13.263 [2024-07-20 18:55:23.393677] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:13.263 [2024-07-20 18:55:23.393774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.263 [2024-07-20 18:55:23.393851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:13.263 [2024-07-20 18:55:23.393906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:13.263 [2024-07-20 18:55:23.393909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:13.263 Malloc0 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.263 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:13.552 Delay0 00:25:13.552 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.552 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:13.552 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.552 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:13.552 [2024-07-20 18:55:23.588138] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:13.552 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.552 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:25:13.552 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.552 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:13.552 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.552 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:13.552 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.552 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:13.552 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.552 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:13.552 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.552 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:13.552 [2024-07-20 18:55:23.616430] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:13.552 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.553 18:55:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:14.116 18:55:24 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:14.116 18:55:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:25:14.116 18:55:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:14.116 18:55:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:14.116 18:55:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:25:16.010 18:55:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:16.010 18:55:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:16.010 18:55:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:25:16.010 18:55:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:16.010 18:55:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:16.010 18:55:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:25:16.010 18:55:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1451410 00:25:16.010 18:55:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:16.010 18:55:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:16.010 [global] 00:25:16.010 thread=1 00:25:16.010 invalidate=1 00:25:16.010 rw=write 00:25:16.010 time_based=1 00:25:16.010 runtime=60 00:25:16.010 
ioengine=libaio 00:25:16.010 direct=1 00:25:16.010 bs=4096 00:25:16.010 iodepth=1 00:25:16.010 norandommap=0 00:25:16.010 numjobs=1 00:25:16.010 00:25:16.010 verify_dump=1 00:25:16.010 verify_backlog=512 00:25:16.010 verify_state_save=0 00:25:16.010 do_verify=1 00:25:16.010 verify=crc32c-intel 00:25:16.010 [job0] 00:25:16.010 filename=/dev/nvme0n1 00:25:16.010 Could not set queue depth (nvme0n1) 00:25:16.266 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:16.266 fio-3.35 00:25:16.266 Starting 1 thread 00:25:19.545 18:55:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:19.545 18:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.545 18:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:19.545 true 00:25:19.545 18:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.545 18:55:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:19.545 18:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.545 18:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:19.545 true 00:25:19.545 18:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.545 18:55:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:19.545 18:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.545 18:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:19.545 true 00:25:19.545 18:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.545 18:55:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:19.545 18:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.545 18:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:19.545 true 00:25:19.545 18:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.545 18:55:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:22.073 18:55:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:22.073 18:55:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.073 18:55:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:22.073 true 00:25:22.073 18:55:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.073 18:55:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:22.073 18:55:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.073 18:55:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:22.073 true 00:25:22.073 18:55:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.073 
18:55:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:22.073 18:55:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.073 18:55:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:22.073 true 00:25:22.073 18:55:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.073 18:55:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:22.073 18:55:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.073 18:55:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:22.073 true 00:25:22.073 18:55:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.073 18:55:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:22.073 18:55:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1451410 00:26:18.266 00:26:18.266 job0: (groupid=0, jobs=1): err= 0: pid=1451481: Sat Jul 20 18:56:26 2024 00:26:18.266 read: IOPS=61, BW=247KiB/s (253kB/s)(14.5MiB/60033msec) 00:26:18.266 slat (usec): min=6, max=8832, avg=21.98, stdev=197.98 00:26:18.266 clat (usec): min=474, max=40826k, avg=15666.48, stdev=670125.66 00:26:18.266 lat (usec): min=487, max=40826k, avg=15688.47, stdev=670125.87 00:26:18.266 clat percentiles (usec): 00:26:18.266 | 1.00th=[ 498], 5.00th=[ 510], 10.00th=[ 523], 00:26:18.266 | 20.00th=[ 537], 30.00th=[ 545], 40.00th=[ 562], 00:26:18.266 | 50.00th=[ 619], 60.00th=[ 685], 70.00th=[ 717], 00:26:18.266 | 80.00th=[ 775], 90.00th=[ 1893], 95.00th=[ 41157], 00:26:18.266 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:26:18.266 | 99.95th=[ 42730], 99.99th=[17112761] 00:26:18.266 write: IOPS=68, BW=273KiB/s (279kB/s)(16.0MiB/60033msec); 0 zone resets 00:26:18.266 slat (nsec): min=6543, max=85340, avg=20406.00, stdev=11517.12 00:26:18.266 clat (usec): min=297, max=617, avg=408.71, stdev=49.71 00:26:18.266 lat (usec): min=306, max=659, avg=429.12, stdev=54.42 00:26:18.266 clat percentiles (usec): 00:26:18.266 | 1.00th=[ 314], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 363], 00:26:18.266 | 30.00th=[ 379], 40.00th=[ 396], 50.00th=[ 408], 60.00th=[ 416], 00:26:18.266 | 70.00th=[ 424], 80.00th=[ 445], 90.00th=[ 474], 95.00th=[ 510], 00:26:18.266 | 99.00th=[ 537], 99.50th=[ 545], 99.90th=[ 594], 99.95th=[ 594], 00:26:18.266 | 99.99th=[ 619] 00:26:18.266 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=8 00:26:18.266 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=8 00:26:18.266 lat (usec) : 500=50.01%, 750=38.84%, 1000=5.56% 00:26:18.266 lat (msec) : 2=0.83%, 4=0.04%, 50=4.70%, >=2000=0.01% 00:26:18.266 cpu : usr=0.12%, sys=0.28%, ctx=7810, majf=0, minf=2 00:26:18.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:18.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.266 issued rwts: total=3712,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:18.266 00:26:18.266 Run status group 0 (all jobs): 00:26:18.266 READ: bw=247KiB/s (253kB/s), 247KiB/s-247KiB/s (253kB/s-253kB/s), io=14.5MiB 
(15.2MB), run=60033-60033msec 00:26:18.266 WRITE: bw=273KiB/s (279kB/s), 273KiB/s-273KiB/s (279kB/s-279kB/s), io=16.0MiB (16.8MB), run=60033-60033msec 00:26:18.266 00:26:18.266 Disk stats (read/write): 00:26:18.266 nvme0n1: ios=3807/4096, merge=0/0, ticks=17356/1602, in_queue=18958, util=99.76% 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:18.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:18.266 nvmf hotplug test: fio successful as expected 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:18.266 rmmod nvme_tcp 00:26:18.266 rmmod nvme_fabrics 00:26:18.266 rmmod nvme_keyring 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1450985 ']' 00:26:18.266 18:56:26 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1450985 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 1450985 ']' 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 1450985 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1450985 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1450985' 00:26:18.266 killing process with pid 1450985 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 1450985 00:26:18.266 18:56:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 1450985 00:26:18.266 18:56:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:18.266 18:56:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:18.266 18:56:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:18.266 18:56:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:18.266 18:56:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:18.266 18:56:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.266 18:56:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:18.266 18:56:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.836 18:56:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:18.836 00:26:18.836 real 1m8.153s 00:26:18.836 user 4m10.805s 00:26:18.836 sys 0m6.369s 00:26:18.836 18:56:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:18.836 18:56:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:18.836 ************************************ 00:26:18.836 END TEST nvmf_initiator_timeout 00:26:18.836 ************************************ 00:26:18.836 18:56:29 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:26:18.837 18:56:29 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:18.837 18:56:29 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:26:18.837 18:56:29 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:18.837 18:56:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:20.758 
18:56:31 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:20.758 18:56:31 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:20.759 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:20.759 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@366 
-- # (( 0 > 0 )) 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:20.759 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:20.759 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:26:20.759 18:56:31 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:20.759 18:56:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:20.759 18:56:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:20.759 18:56:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:20.759 ************************************ 00:26:20.759 START TEST nvmf_perf_adq 00:26:20.759 ************************************ 00:26:20.759 18:56:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:21.016 * Looking for test storage... 
00:26:21.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.016 18:56:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.017 18:56:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:21.017 18:56:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.017 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:21.017 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:21.017 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:21.017 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:21.017 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.017 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.017 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:21.017 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:21.017 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:21.017 18:56:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:21.017 18:56:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:21.017 18:56:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:22.915 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:22.916 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:22.916 Found 0000:0a:00.1 (0x8086 - 0x159b) 
00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:22.916 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:22.916 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:22.916 18:56:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:23.481 18:56:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:24.853 18:56:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:30.114 18:56:40 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:30.114 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.114 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:30.115 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:30.115 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:30.115 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:30.115 18:56:40 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:30.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:30.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:26:30.115 00:26:30.115 --- 10.0.0.2 ping statistics --- 00:26:30.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.115 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:30.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:30.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:26:30.115 00:26:30.115 --- 10.0.0.1 ping statistics --- 00:26:30.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.115 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1462873 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1462873 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 1462873 ']' 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:30.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:30.115 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:30.115 [2024-07-20 18:56:40.304997] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
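At this point the test bed is fully wired: the target-side E810 port lives in its own network namespace while the initiator-side port stays in the default namespace, which is what lets a single host exercise NVMe/TCP end to end over real hardware. The nvmf_tcp_init sequence traced above reduces to the following commands (interface names and addresses as used in this run):

ip netns add cvl_0_0_ns_spdk                        # namespace that will host nvmf_tgt
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit the NVMe/TCP port
ping -c 1 10.0.0.2                                  # reachability checks, as logged above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1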
00:26:30.115 [2024-07-20 18:56:40.305076] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:30.115 EAL: No free 2048 kB hugepages reported on node 1 00:26:30.115 [2024-07-20 18:56:40.375190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:30.374 [2024-07-20 18:56:40.466349] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:30.374 [2024-07-20 18:56:40.466408] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:30.374 [2024-07-20 18:56:40.466435] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:30.374 [2024-07-20 18:56:40.466449] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:30.374 [2024-07-20 18:56:40.466460] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:30.374 [2024-07-20 18:56:40.466543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.374 [2024-07-20 18:56:40.466612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:30.374 [2024-07-20 18:56:40.466703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:30.374 [2024-07-20 18:56:40.466705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:30.374 [2024-07-20 18:56:40.688429] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.374 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:30.633 Malloc1 00:26:30.633 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.633 18:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:30.633 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.633 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:30.633 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.633 18:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:30.633 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.633 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:30.633 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.633 18:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:30.633 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.633 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:30.633 [2024-07-20 18:56:40.739922] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:30.633 18:56:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.633 18:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1463016 00:26:30.633 18:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:30.633 18:56:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:30.633 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.533 18:56:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:32.533 18:56:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.533 18:56:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:32.533 18:56:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.533 18:56:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:32.533 "tick_rate": 2700000000, 
00:26:32.533 "poll_groups": [ 00:26:32.533 { 00:26:32.533 "name": "nvmf_tgt_poll_group_000", 00:26:32.533 "admin_qpairs": 1, 00:26:32.533 "io_qpairs": 1, 00:26:32.533 "current_admin_qpairs": 1, 00:26:32.533 "current_io_qpairs": 1, 00:26:32.533 "pending_bdev_io": 0, 00:26:32.533 "completed_nvme_io": 15913, 00:26:32.533 "transports": [ 00:26:32.533 { 00:26:32.533 "trtype": "TCP" 00:26:32.533 } 00:26:32.533 ] 00:26:32.533 }, 00:26:32.533 { 00:26:32.533 "name": "nvmf_tgt_poll_group_001", 00:26:32.533 "admin_qpairs": 0, 00:26:32.533 "io_qpairs": 1, 00:26:32.533 "current_admin_qpairs": 0, 00:26:32.533 "current_io_qpairs": 1, 00:26:32.533 "pending_bdev_io": 0, 00:26:32.533 "completed_nvme_io": 18895, 00:26:32.533 "transports": [ 00:26:32.533 { 00:26:32.533 "trtype": "TCP" 00:26:32.533 } 00:26:32.533 ] 00:26:32.533 }, 00:26:32.533 { 00:26:32.533 "name": "nvmf_tgt_poll_group_002", 00:26:32.533 "admin_qpairs": 0, 00:26:32.533 "io_qpairs": 1, 00:26:32.533 "current_admin_qpairs": 0, 00:26:32.533 "current_io_qpairs": 1, 00:26:32.533 "pending_bdev_io": 0, 00:26:32.533 "completed_nvme_io": 20613, 00:26:32.533 "transports": [ 00:26:32.533 { 00:26:32.533 "trtype": "TCP" 00:26:32.533 } 00:26:32.533 ] 00:26:32.533 }, 00:26:32.533 { 00:26:32.533 "name": "nvmf_tgt_poll_group_003", 00:26:32.533 "admin_qpairs": 0, 00:26:32.533 "io_qpairs": 1, 00:26:32.533 "current_admin_qpairs": 0, 00:26:32.533 "current_io_qpairs": 1, 00:26:32.533 "pending_bdev_io": 0, 00:26:32.533 "completed_nvme_io": 20572, 00:26:32.533 "transports": [ 00:26:32.533 { 00:26:32.533 "trtype": "TCP" 00:26:32.533 } 00:26:32.533 ] 00:26:32.533 } 00:26:32.533 ] 00:26:32.533 }' 00:26:32.533 18:56:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:32.533 18:56:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:32.533 18:56:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:26:32.533 18:56:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:32.533 18:56:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1463016 00:26:40.636 Initializing NVMe Controllers 00:26:40.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:40.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:40.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:40.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:40.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:40.636 Initialization complete. Launching workers. 
00:26:40.636 ======================================================== 00:26:40.636 Latency(us) 00:26:40.636 Device Information : IOPS MiB/s Average min max 00:26:40.636 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10841.60 42.35 5902.90 2370.85 9607.74 00:26:40.636 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9810.30 38.32 6523.60 3505.28 9489.66 00:26:40.636 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10804.10 42.20 5924.20 1951.01 9111.60 00:26:40.636 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8377.20 32.72 7644.07 2907.04 10461.23 00:26:40.636 ======================================================== 00:26:40.636 Total : 39833.19 155.60 6427.72 1951.01 10461.23 00:26:40.636 00:26:40.636 18:56:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:26:40.636 18:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:40.636 18:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:40.636 18:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:40.636 18:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:40.636 18:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:40.636 18:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:40.636 rmmod nvme_tcp 00:26:40.636 rmmod nvme_fabrics 00:26:40.636 rmmod nvme_keyring 00:26:40.636 18:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:40.636 18:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:40.636 18:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:40.636 18:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1462873 ']' 00:26:40.636 18:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1462873 00:26:40.636 18:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 1462873 ']' 00:26:40.636 18:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 1462873 00:26:40.636 18:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:26:40.636 18:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:40.636 18:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1462873 00:26:40.901 18:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:40.901 18:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:40.901 18:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1462873' 00:26:40.901 killing process with pid 1462873 00:26:40.901 18:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 1462873 00:26:40.901 18:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 1462873 00:26:40.901 18:56:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:40.901 18:56:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:40.901 18:56:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:40.901 18:56:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:40.901 18:56:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:40.901 18:56:51 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.901 18:56:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:40.901 18:56:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.444 18:56:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:43.444 18:56:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:26:43.444 18:56:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:43.702 18:56:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:45.599 18:56:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:50.867 18:57:00 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:50.867 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:50.867 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:50.867 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:50.867 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:50.867 
18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:50.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:50.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:26:50.867 00:26:50.867 --- 10.0.0.2 ping statistics --- 00:26:50.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.867 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:26:50.867 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:50.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:50.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:26:50.868 00:26:50.868 --- 10.0.0.1 ping statistics --- 00:26:50.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.868 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:50.868 net.core.busy_poll = 1 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:50.868 net.core.busy_read = 1 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1465560 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1465560 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 1465560 ']' 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.868 [2024-07-20 18:57:00.728645] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:26:50.868 [2024-07-20 18:57:00.728735] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.868 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.868 [2024-07-20 18:57:00.798214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:50.868 [2024-07-20 18:57:00.884204] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.868 [2024-07-20 18:57:00.884255] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.868 [2024-07-20 18:57:00.884284] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.868 [2024-07-20 18:57:00.884296] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.868 [2024-07-20 18:57:00.884306] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
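The ethtool/sysctl/tc sequence traced a few lines above (just before the second nvmfappstart) is the host-side ADQ (Application Device Queues) configuration: hardware TC offload is enabled on the target port, busy polling is turned on, an mqprio root qdisc splits the queues into two traffic classes, and a flower filter steers NVMe/TCP traffic for 10.0.0.2:4420 into the dedicated class in hardware. Condensed from the trace, with this run's interface and addresses (the log runs each command inside the cvl_0_0_ns_spdk namespace and finishes with scripts/perf/nvmf/set_xps_rxqs to set the XPS/RXQ mapping):

ethtool --offload cvl_0_0 hw-tc-offload on
ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# two traffic classes: 2 queues at offset 0 for TC0, 2 queues at offset 2 for TC1
tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev cvl_0_0 ingress
# steer the NVMe/TCP listener traffic into TC1 in hardware (skip_sw)
tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1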
00:26:50.868 [2024-07-20 18:57:00.884386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.868 [2024-07-20 18:57:00.884411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:50.868 [2024-07-20 18:57:00.884466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:50.868 [2024-07-20 18:57:00.884469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.868 18:57:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.868 [2024-07-20 18:57:01.121325] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.868 Malloc1 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.868 18:57:01 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.868 [2024-07-20 18:57:01.172578] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1465637 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:26:50.868 18:57:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:51.125 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.019 18:57:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:26:53.019 18:57:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.019 18:57:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:53.019 18:57:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.019 18:57:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:26:53.019 "tick_rate": 2700000000, 00:26:53.019 "poll_groups": [ 00:26:53.019 { 00:26:53.019 "name": "nvmf_tgt_poll_group_000", 00:26:53.019 "admin_qpairs": 1, 00:26:53.019 "io_qpairs": 2, 00:26:53.019 "current_admin_qpairs": 1, 00:26:53.019 "current_io_qpairs": 2, 00:26:53.019 "pending_bdev_io": 0, 00:26:53.019 "completed_nvme_io": 26151, 00:26:53.019 "transports": [ 00:26:53.019 { 00:26:53.019 "trtype": "TCP" 00:26:53.019 } 00:26:53.019 ] 00:26:53.019 }, 00:26:53.019 { 00:26:53.019 "name": "nvmf_tgt_poll_group_001", 00:26:53.019 "admin_qpairs": 0, 00:26:53.019 "io_qpairs": 2, 00:26:53.019 "current_admin_qpairs": 0, 00:26:53.019 "current_io_qpairs": 2, 00:26:53.019 "pending_bdev_io": 0, 00:26:53.019 "completed_nvme_io": 19534, 00:26:53.019 "transports": [ 00:26:53.019 { 00:26:53.019 "trtype": "TCP" 00:26:53.019 } 00:26:53.019 ] 00:26:53.019 }, 00:26:53.019 { 00:26:53.019 "name": "nvmf_tgt_poll_group_002", 00:26:53.019 "admin_qpairs": 0, 00:26:53.019 "io_qpairs": 0, 00:26:53.019 "current_admin_qpairs": 0, 00:26:53.019 "current_io_qpairs": 0, 00:26:53.019 "pending_bdev_io": 0, 00:26:53.019 "completed_nvme_io": 0, 
00:26:53.019 "transports": [ 00:26:53.019 { 00:26:53.019 "trtype": "TCP" 00:26:53.019 } 00:26:53.019 ] 00:26:53.019 }, 00:26:53.019 { 00:26:53.019 "name": "nvmf_tgt_poll_group_003", 00:26:53.019 "admin_qpairs": 0, 00:26:53.019 "io_qpairs": 0, 00:26:53.019 "current_admin_qpairs": 0, 00:26:53.019 "current_io_qpairs": 0, 00:26:53.019 "pending_bdev_io": 0, 00:26:53.019 "completed_nvme_io": 0, 00:26:53.019 "transports": [ 00:26:53.019 { 00:26:53.019 "trtype": "TCP" 00:26:53.019 } 00:26:53.019 ] 00:26:53.019 } 00:26:53.019 ] 00:26:53.019 }' 00:26:53.019 18:57:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:53.019 18:57:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:26:53.019 18:57:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:26:53.019 18:57:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:26:53.019 18:57:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1465637 00:27:01.116 Initializing NVMe Controllers 00:27:01.116 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:01.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:01.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:01.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:01.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:01.116 Initialization complete. Launching workers. 00:27:01.116 ======================================================== 00:27:01.116 Latency(us) 00:27:01.116 Device Information : IOPS MiB/s Average min max 00:27:01.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6625.00 25.88 9672.06 1844.86 53104.22 00:27:01.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7544.20 29.47 8509.69 1721.91 53245.98 00:27:01.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5476.80 21.39 11694.69 2366.05 57978.50 00:27:01.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4687.40 18.31 13662.01 2032.69 59516.80 00:27:01.116 ======================================================== 00:27:01.116 Total : 24333.40 95.05 10535.52 1721.91 59516.80 00:27:01.116 00:27:01.116 18:57:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:01.116 18:57:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:01.116 18:57:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:01.116 18:57:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:01.116 18:57:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:01.116 18:57:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:01.116 18:57:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:01.116 rmmod nvme_tcp 00:27:01.116 rmmod nvme_fabrics 00:27:01.116 rmmod nvme_keyring 00:27:01.116 18:57:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:01.116 18:57:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:01.116 18:57:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:01.116 18:57:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1465560 ']' 00:27:01.116 18:57:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1465560 00:27:01.116 18:57:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 1465560 ']' 00:27:01.116 18:57:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 1465560 00:27:01.116 18:57:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:01.116 18:57:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:01.116 18:57:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1465560 00:27:01.375 18:57:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:01.375 18:57:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:01.375 18:57:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1465560' 00:27:01.375 killing process with pid 1465560 00:27:01.375 18:57:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 1465560 00:27:01.375 18:57:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 1465560 00:27:01.632 18:57:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:01.632 18:57:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:01.632 18:57:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:01.632 18:57:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:01.632 18:57:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:01.632 18:57:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.632 18:57:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:01.632 18:57:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.918 18:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:04.919 18:57:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:04.919 00:27:04.919 real 0m43.686s 00:27:04.919 user 2m27.210s 00:27:04.919 sys 0m13.389s 00:27:04.919 18:57:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:04.919 18:57:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:04.919 ************************************ 00:27:04.919 END TEST nvmf_perf_adq 00:27:04.919 ************************************ 00:27:04.919 18:57:14 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:04.919 18:57:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:04.919 18:57:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:04.919 18:57:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:04.919 ************************************ 00:27:04.919 START TEST nvmf_shutdown 00:27:04.919 ************************************ 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:04.919 * Looking for test storage... 
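Before the shutdown suite gets going, one note on the pass/fail check recorded in the nvmf_perf_adq run above: it boils down to counting poll groups whose current_io_qpairs stayed at 0. With --enable-placement-id 1 and the ADQ filter in place, the four I/O qpairs collapsed onto two of the four poll groups, so at least two groups must remain idle for the test to pass. A standalone way to reproduce that check against a running target is sketched below; invoking rpc.py directly against the default /var/tmp/spdk.sock socket is an assumption here (the test goes through its rpc_cmd wrapper instead), and the jq filter is the same one the test uses.

    # Count poll groups that received no I/O qpairs; the test fails if fewer than 2 are idle.
    idle=$(./scripts/rpc.py nvmf_get_stats \
             | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
             | wc -l)
    echo "idle poll groups: $idle"
    [[ "$idle" -lt 2 ]] && echo "ADQ placement check failed" >&2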
00:27:04.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:04.919 ************************************ 00:27:04.919 START TEST nvmf_shutdown_tc1 00:27:04.919 ************************************ 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:27:04.919 18:57:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:04.919 18:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:06.818 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:06.818 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:06.818 18:57:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:06.818 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:06.818 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:06.818 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:06.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:06.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:27:06.819 00:27:06.819 --- 10.0.0.2 ping statistics --- 00:27:06.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.819 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:06.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:06.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:27:06.819 00:27:06.819 --- 10.0.0.1 ping statistics --- 00:27:06.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.819 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1469442 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1469442 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 1469442 ']' 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:06.819 18:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:06.819 [2024-07-20 18:57:17.022417] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:27:06.819 [2024-07-20 18:57:17.022500] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.819 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.819 [2024-07-20 18:57:17.089736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:07.077 [2024-07-20 18:57:17.178740] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:07.077 [2024-07-20 18:57:17.178800] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.077 [2024-07-20 18:57:17.178815] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:07.077 [2024-07-20 18:57:17.178826] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:07.077 [2024-07-20 18:57:17.178836] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:07.077 [2024-07-20 18:57:17.178936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:07.077 [2024-07-20 18:57:17.178999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:07.077 [2024-07-20 18:57:17.179064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:07.077 [2024-07-20 18:57:17.179066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:07.077 [2024-07-20 18:57:17.331591] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.077 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:07.334 Malloc1 00:27:07.334 [2024-07-20 18:57:17.421350] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.335 Malloc2 00:27:07.335 Malloc3 00:27:07.335 Malloc4 00:27:07.335 Malloc5 00:27:07.335 Malloc6 00:27:07.593 Malloc7 00:27:07.593 Malloc8 00:27:07.593 Malloc9 00:27:07.593 Malloc10 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1469625 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1469625 /var/tmp/bdevperf.sock 00:27:07.593 18:57:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 1469625 ']' 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:07.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.593 { 00:27:07.593 "params": { 00:27:07.593 "name": "Nvme$subsystem", 00:27:07.593 "trtype": "$TEST_TRANSPORT", 00:27:07.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.593 "adrfam": "ipv4", 00:27:07.593 "trsvcid": "$NVMF_PORT", 00:27:07.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.593 "hdgst": ${hdgst:-false}, 00:27:07.593 "ddgst": ${ddgst:-false} 00:27:07.593 }, 00:27:07.593 "method": "bdev_nvme_attach_controller" 00:27:07.593 } 00:27:07.593 EOF 00:27:07.593 )") 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.593 { 00:27:07.593 "params": { 00:27:07.593 "name": "Nvme$subsystem", 00:27:07.593 "trtype": "$TEST_TRANSPORT", 00:27:07.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.593 "adrfam": "ipv4", 00:27:07.593 "trsvcid": "$NVMF_PORT", 00:27:07.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.593 "hdgst": ${hdgst:-false}, 00:27:07.593 "ddgst": ${ddgst:-false} 00:27:07.593 }, 00:27:07.593 "method": "bdev_nvme_attach_controller" 00:27:07.593 } 00:27:07.593 EOF 00:27:07.593 )") 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.593 { 00:27:07.593 "params": { 00:27:07.593 "name": "Nvme$subsystem", 00:27:07.593 "trtype": 
"$TEST_TRANSPORT", 00:27:07.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.593 "adrfam": "ipv4", 00:27:07.593 "trsvcid": "$NVMF_PORT", 00:27:07.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.593 "hdgst": ${hdgst:-false}, 00:27:07.593 "ddgst": ${ddgst:-false} 00:27:07.593 }, 00:27:07.593 "method": "bdev_nvme_attach_controller" 00:27:07.593 } 00:27:07.593 EOF 00:27:07.593 )") 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.593 { 00:27:07.593 "params": { 00:27:07.593 "name": "Nvme$subsystem", 00:27:07.593 "trtype": "$TEST_TRANSPORT", 00:27:07.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.593 "adrfam": "ipv4", 00:27:07.593 "trsvcid": "$NVMF_PORT", 00:27:07.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.593 "hdgst": ${hdgst:-false}, 00:27:07.593 "ddgst": ${ddgst:-false} 00:27:07.593 }, 00:27:07.593 "method": "bdev_nvme_attach_controller" 00:27:07.593 } 00:27:07.593 EOF 00:27:07.593 )") 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.593 { 00:27:07.593 "params": { 00:27:07.593 "name": "Nvme$subsystem", 00:27:07.593 "trtype": "$TEST_TRANSPORT", 00:27:07.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.593 "adrfam": "ipv4", 00:27:07.593 "trsvcid": "$NVMF_PORT", 00:27:07.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.593 "hdgst": ${hdgst:-false}, 00:27:07.593 "ddgst": ${ddgst:-false} 00:27:07.593 }, 00:27:07.593 "method": "bdev_nvme_attach_controller" 00:27:07.593 } 00:27:07.593 EOF 00:27:07.593 )") 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.593 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.593 { 00:27:07.593 "params": { 00:27:07.593 "name": "Nvme$subsystem", 00:27:07.593 "trtype": "$TEST_TRANSPORT", 00:27:07.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.593 "adrfam": "ipv4", 00:27:07.593 "trsvcid": "$NVMF_PORT", 00:27:07.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.593 "hdgst": ${hdgst:-false}, 00:27:07.593 "ddgst": ${ddgst:-false} 00:27:07.593 }, 00:27:07.593 "method": "bdev_nvme_attach_controller" 00:27:07.594 } 00:27:07.594 EOF 00:27:07.594 )") 00:27:07.594 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:07.594 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.594 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.594 { 00:27:07.594 "params": { 00:27:07.594 "name": "Nvme$subsystem", 00:27:07.594 "trtype": "$TEST_TRANSPORT", 
00:27:07.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.594 "adrfam": "ipv4", 00:27:07.594 "trsvcid": "$NVMF_PORT", 00:27:07.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.594 "hdgst": ${hdgst:-false}, 00:27:07.594 "ddgst": ${ddgst:-false} 00:27:07.594 }, 00:27:07.594 "method": "bdev_nvme_attach_controller" 00:27:07.594 } 00:27:07.594 EOF 00:27:07.594 )") 00:27:07.594 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:07.594 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.594 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.594 { 00:27:07.594 "params": { 00:27:07.594 "name": "Nvme$subsystem", 00:27:07.594 "trtype": "$TEST_TRANSPORT", 00:27:07.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.594 "adrfam": "ipv4", 00:27:07.594 "trsvcid": "$NVMF_PORT", 00:27:07.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.594 "hdgst": ${hdgst:-false}, 00:27:07.594 "ddgst": ${ddgst:-false} 00:27:07.594 }, 00:27:07.594 "method": "bdev_nvme_attach_controller" 00:27:07.594 } 00:27:07.594 EOF 00:27:07.594 )") 00:27:07.594 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:07.594 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.594 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.594 { 00:27:07.594 "params": { 00:27:07.594 "name": "Nvme$subsystem", 00:27:07.594 "trtype": "$TEST_TRANSPORT", 00:27:07.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.594 "adrfam": "ipv4", 00:27:07.594 "trsvcid": "$NVMF_PORT", 00:27:07.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.594 "hdgst": ${hdgst:-false}, 00:27:07.594 "ddgst": ${ddgst:-false} 00:27:07.594 }, 00:27:07.594 "method": "bdev_nvme_attach_controller" 00:27:07.594 } 00:27:07.594 EOF 00:27:07.594 )") 00:27:07.594 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:07.594 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.594 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.594 { 00:27:07.594 "params": { 00:27:07.594 "name": "Nvme$subsystem", 00:27:07.594 "trtype": "$TEST_TRANSPORT", 00:27:07.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.594 "adrfam": "ipv4", 00:27:07.594 "trsvcid": "$NVMF_PORT", 00:27:07.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.594 "hdgst": ${hdgst:-false}, 00:27:07.594 "ddgst": ${ddgst:-false} 00:27:07.594 }, 00:27:07.594 "method": "bdev_nvme_attach_controller" 00:27:07.594 } 00:27:07.594 EOF 00:27:07.594 )") 00:27:07.594 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:07.594 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:07.594 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:07.594 18:57:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:07.594 "params": { 00:27:07.594 "name": "Nvme1", 00:27:07.594 "trtype": "tcp", 00:27:07.594 "traddr": "10.0.0.2", 00:27:07.594 "adrfam": "ipv4", 00:27:07.594 "trsvcid": "4420", 00:27:07.594 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:07.594 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:07.594 "hdgst": false, 00:27:07.594 "ddgst": false 00:27:07.594 }, 00:27:07.594 "method": "bdev_nvme_attach_controller" 00:27:07.594 },{ 00:27:07.594 "params": { 00:27:07.594 "name": "Nvme2", 00:27:07.594 "trtype": "tcp", 00:27:07.594 "traddr": "10.0.0.2", 00:27:07.594 "adrfam": "ipv4", 00:27:07.594 "trsvcid": "4420", 00:27:07.594 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:07.594 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:07.594 "hdgst": false, 00:27:07.594 "ddgst": false 00:27:07.594 }, 00:27:07.594 "method": "bdev_nvme_attach_controller" 00:27:07.594 },{ 00:27:07.594 "params": { 00:27:07.594 "name": "Nvme3", 00:27:07.594 "trtype": "tcp", 00:27:07.594 "traddr": "10.0.0.2", 00:27:07.594 "adrfam": "ipv4", 00:27:07.594 "trsvcid": "4420", 00:27:07.594 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:07.594 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:07.594 "hdgst": false, 00:27:07.594 "ddgst": false 00:27:07.594 }, 00:27:07.594 "method": "bdev_nvme_attach_controller" 00:27:07.594 },{ 00:27:07.594 "params": { 00:27:07.594 "name": "Nvme4", 00:27:07.594 "trtype": "tcp", 00:27:07.594 "traddr": "10.0.0.2", 00:27:07.594 "adrfam": "ipv4", 00:27:07.594 "trsvcid": "4420", 00:27:07.594 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:07.594 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:07.594 "hdgst": false, 00:27:07.594 "ddgst": false 00:27:07.594 }, 00:27:07.594 "method": "bdev_nvme_attach_controller" 00:27:07.594 },{ 00:27:07.594 "params": { 00:27:07.594 "name": "Nvme5", 00:27:07.594 "trtype": "tcp", 00:27:07.594 "traddr": "10.0.0.2", 00:27:07.594 "adrfam": "ipv4", 00:27:07.594 "trsvcid": "4420", 00:27:07.594 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:07.594 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:07.594 "hdgst": false, 00:27:07.594 "ddgst": false 00:27:07.594 }, 00:27:07.594 "method": "bdev_nvme_attach_controller" 00:27:07.594 },{ 00:27:07.594 "params": { 00:27:07.594 "name": "Nvme6", 00:27:07.594 "trtype": "tcp", 00:27:07.594 "traddr": "10.0.0.2", 00:27:07.594 "adrfam": "ipv4", 00:27:07.594 "trsvcid": "4420", 00:27:07.594 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:07.594 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:07.594 "hdgst": false, 00:27:07.594 "ddgst": false 00:27:07.594 }, 00:27:07.594 "method": "bdev_nvme_attach_controller" 00:27:07.594 },{ 00:27:07.594 "params": { 00:27:07.594 "name": "Nvme7", 00:27:07.594 "trtype": "tcp", 00:27:07.594 "traddr": "10.0.0.2", 00:27:07.594 "adrfam": "ipv4", 00:27:07.594 "trsvcid": "4420", 00:27:07.594 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:07.594 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:07.594 "hdgst": false, 00:27:07.594 "ddgst": false 00:27:07.594 }, 00:27:07.594 "method": "bdev_nvme_attach_controller" 00:27:07.594 },{ 00:27:07.594 "params": { 00:27:07.594 "name": "Nvme8", 00:27:07.594 "trtype": "tcp", 00:27:07.594 "traddr": "10.0.0.2", 00:27:07.594 "adrfam": "ipv4", 00:27:07.594 "trsvcid": "4420", 00:27:07.594 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:07.594 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:07.594 "hdgst": false, 
00:27:07.594 "ddgst": false 00:27:07.594 }, 00:27:07.594 "method": "bdev_nvme_attach_controller" 00:27:07.594 },{ 00:27:07.594 "params": { 00:27:07.594 "name": "Nvme9", 00:27:07.594 "trtype": "tcp", 00:27:07.594 "traddr": "10.0.0.2", 00:27:07.594 "adrfam": "ipv4", 00:27:07.594 "trsvcid": "4420", 00:27:07.594 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:07.594 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:07.594 "hdgst": false, 00:27:07.594 "ddgst": false 00:27:07.594 }, 00:27:07.594 "method": "bdev_nvme_attach_controller" 00:27:07.594 },{ 00:27:07.594 "params": { 00:27:07.594 "name": "Nvme10", 00:27:07.594 "trtype": "tcp", 00:27:07.594 "traddr": "10.0.0.2", 00:27:07.594 "adrfam": "ipv4", 00:27:07.594 "trsvcid": "4420", 00:27:07.594 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:07.594 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:07.594 "hdgst": false, 00:27:07.594 "ddgst": false 00:27:07.594 }, 00:27:07.594 "method": "bdev_nvme_attach_controller" 00:27:07.594 }' 00:27:07.851 [2024-07-20 18:57:17.916522] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:07.851 [2024-07-20 18:57:17.916600] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:07.851 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.852 [2024-07-20 18:57:17.981773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.852 [2024-07-20 18:57:18.068205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.748 18:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:09.748 18:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:09.748 18:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:09.748 18:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.748 18:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:09.748 18:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.748 18:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1469625 00:27:09.748 18:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:09.748 18:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:10.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1469625 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:10.683 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1469442 00:27:10.683 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:10.683 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:10.683 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:10.683 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@532 -- # local subsystem config 00:27:10.683 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:10.683 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:10.683 { 00:27:10.683 "params": { 00:27:10.683 "name": "Nvme$subsystem", 00:27:10.683 "trtype": "$TEST_TRANSPORT", 00:27:10.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:10.683 "adrfam": "ipv4", 00:27:10.683 "trsvcid": "$NVMF_PORT", 00:27:10.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:10.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:10.683 "hdgst": ${hdgst:-false}, 00:27:10.683 "ddgst": ${ddgst:-false} 00:27:10.683 }, 00:27:10.683 "method": "bdev_nvme_attach_controller" 00:27:10.683 } 00:27:10.683 EOF 00:27:10.683 )") 00:27:10.683 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:10.683 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:10.683 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:10.683 { 00:27:10.683 "params": { 00:27:10.683 "name": "Nvme$subsystem", 00:27:10.683 "trtype": "$TEST_TRANSPORT", 00:27:10.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:10.683 "adrfam": "ipv4", 00:27:10.683 "trsvcid": "$NVMF_PORT", 00:27:10.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:10.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:10.683 "hdgst": ${hdgst:-false}, 00:27:10.683 "ddgst": ${ddgst:-false} 00:27:10.683 }, 00:27:10.683 "method": "bdev_nvme_attach_controller" 00:27:10.683 } 00:27:10.683 EOF 00:27:10.683 )") 00:27:10.683 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:10.683 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:10.683 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:10.683 { 00:27:10.683 "params": { 00:27:10.683 "name": "Nvme$subsystem", 00:27:10.683 "trtype": "$TEST_TRANSPORT", 00:27:10.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:10.684 "adrfam": "ipv4", 00:27:10.684 "trsvcid": "$NVMF_PORT", 00:27:10.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:10.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:10.684 "hdgst": ${hdgst:-false}, 00:27:10.684 "ddgst": ${ddgst:-false} 00:27:10.684 }, 00:27:10.684 "method": "bdev_nvme_attach_controller" 00:27:10.684 } 00:27:10.684 EOF 00:27:10.684 )") 00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:10.684 { 00:27:10.684 "params": { 00:27:10.684 "name": "Nvme$subsystem", 00:27:10.684 "trtype": "$TEST_TRANSPORT", 00:27:10.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:10.684 "adrfam": "ipv4", 00:27:10.684 "trsvcid": "$NVMF_PORT", 00:27:10.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:10.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:10.684 "hdgst": ${hdgst:-false}, 00:27:10.684 "ddgst": ${ddgst:-false} 00:27:10.684 }, 00:27:10.684 "method": "bdev_nvme_attach_controller" 00:27:10.684 } 00:27:10.684 EOF 00:27:10.684 )") 00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:10.684 { 00:27:10.684 "params": { 00:27:10.684 "name": "Nvme$subsystem", 00:27:10.684 "trtype": "$TEST_TRANSPORT", 00:27:10.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:10.684 "adrfam": "ipv4", 00:27:10.684 "trsvcid": "$NVMF_PORT", 00:27:10.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:10.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:10.684 "hdgst": ${hdgst:-false}, 00:27:10.684 "ddgst": ${ddgst:-false} 00:27:10.684 }, 00:27:10.684 "method": "bdev_nvme_attach_controller" 00:27:10.684 } 00:27:10.684 EOF 00:27:10.684 )") 00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:10.684 { 00:27:10.684 "params": { 00:27:10.684 "name": "Nvme$subsystem", 00:27:10.684 "trtype": "$TEST_TRANSPORT", 00:27:10.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:10.684 "adrfam": "ipv4", 00:27:10.684 "trsvcid": "$NVMF_PORT", 00:27:10.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:10.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:10.684 "hdgst": ${hdgst:-false}, 00:27:10.684 "ddgst": ${ddgst:-false} 00:27:10.684 }, 00:27:10.684 "method": "bdev_nvme_attach_controller" 00:27:10.684 } 00:27:10.684 EOF 00:27:10.684 )") 00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:10.684 { 00:27:10.684 "params": { 00:27:10.684 "name": "Nvme$subsystem", 00:27:10.684 "trtype": "$TEST_TRANSPORT", 00:27:10.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:10.684 "adrfam": "ipv4", 00:27:10.684 "trsvcid": "$NVMF_PORT", 00:27:10.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:10.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:10.684 "hdgst": ${hdgst:-false}, 00:27:10.684 "ddgst": ${ddgst:-false} 00:27:10.684 }, 00:27:10.684 "method": "bdev_nvme_attach_controller" 00:27:10.684 } 00:27:10.684 EOF 00:27:10.684 )") 00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:10.684 { 00:27:10.684 "params": { 00:27:10.684 "name": "Nvme$subsystem", 00:27:10.684 "trtype": "$TEST_TRANSPORT", 00:27:10.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:10.684 "adrfam": "ipv4", 00:27:10.684 "trsvcid": "$NVMF_PORT", 00:27:10.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:10.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:10.684 "hdgst": ${hdgst:-false}, 00:27:10.684 "ddgst": ${ddgst:-false} 00:27:10.684 }, 00:27:10.684 "method": "bdev_nvme_attach_controller" 00:27:10.684 } 00:27:10.684 EOF 00:27:10.684 )") 00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:10.684 { 00:27:10.684 "params": { 00:27:10.684 "name": "Nvme$subsystem", 00:27:10.684 "trtype": "$TEST_TRANSPORT", 00:27:10.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:10.684 "adrfam": "ipv4", 00:27:10.684 "trsvcid": "$NVMF_PORT", 00:27:10.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:10.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:10.684 "hdgst": ${hdgst:-false}, 00:27:10.684 "ddgst": ${ddgst:-false} 00:27:10.684 }, 00:27:10.684 "method": "bdev_nvme_attach_controller" 00:27:10.684 } 00:27:10.684 EOF 00:27:10.684 )") 00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:10.684 { 00:27:10.684 "params": { 00:27:10.684 "name": "Nvme$subsystem", 00:27:10.684 "trtype": "$TEST_TRANSPORT", 00:27:10.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:10.684 "adrfam": "ipv4", 00:27:10.684 "trsvcid": "$NVMF_PORT", 00:27:10.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:10.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:10.684 "hdgst": ${hdgst:-false}, 00:27:10.684 "ddgst": ${ddgst:-false} 00:27:10.684 }, 00:27:10.684 "method": "bdev_nvme_attach_controller" 00:27:10.684 } 00:27:10.684 EOF 00:27:10.684 )") 00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:10.684 18:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:10.684 "params": { 00:27:10.684 "name": "Nvme1", 00:27:10.684 "trtype": "tcp", 00:27:10.684 "traddr": "10.0.0.2", 00:27:10.684 "adrfam": "ipv4", 00:27:10.684 "trsvcid": "4420", 00:27:10.684 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:10.684 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:10.684 "hdgst": false, 00:27:10.684 "ddgst": false 00:27:10.684 }, 00:27:10.684 "method": "bdev_nvme_attach_controller" 00:27:10.684 },{ 00:27:10.684 "params": { 00:27:10.684 "name": "Nvme2", 00:27:10.684 "trtype": "tcp", 00:27:10.684 "traddr": "10.0.0.2", 00:27:10.684 "adrfam": "ipv4", 00:27:10.684 "trsvcid": "4420", 00:27:10.684 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:10.684 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:10.684 "hdgst": false, 00:27:10.684 "ddgst": false 00:27:10.684 }, 00:27:10.684 "method": "bdev_nvme_attach_controller" 00:27:10.684 },{ 00:27:10.684 "params": { 00:27:10.684 "name": "Nvme3", 00:27:10.684 "trtype": "tcp", 00:27:10.684 "traddr": "10.0.0.2", 00:27:10.684 "adrfam": "ipv4", 00:27:10.684 "trsvcid": "4420", 00:27:10.684 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:10.684 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:10.684 "hdgst": false, 00:27:10.684 "ddgst": false 00:27:10.684 }, 00:27:10.684 "method": "bdev_nvme_attach_controller" 00:27:10.684 },{ 00:27:10.684 "params": { 00:27:10.684 "name": "Nvme4", 00:27:10.684 "trtype": "tcp", 00:27:10.684 "traddr": "10.0.0.2", 00:27:10.684 "adrfam": "ipv4", 00:27:10.684 "trsvcid": "4420", 00:27:10.684 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:10.684 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:10.684 "hdgst": false, 00:27:10.684 "ddgst": false 00:27:10.684 }, 00:27:10.684 "method": "bdev_nvme_attach_controller" 00:27:10.684 },{ 00:27:10.684 "params": { 00:27:10.684 "name": "Nvme5", 00:27:10.684 "trtype": "tcp", 00:27:10.684 "traddr": "10.0.0.2", 00:27:10.684 "adrfam": "ipv4", 00:27:10.684 "trsvcid": "4420", 00:27:10.684 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:10.684 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:10.684 "hdgst": false, 00:27:10.684 "ddgst": false 00:27:10.684 }, 00:27:10.684 "method": "bdev_nvme_attach_controller" 00:27:10.684 },{ 00:27:10.684 "params": { 00:27:10.684 "name": "Nvme6", 00:27:10.684 "trtype": "tcp", 00:27:10.684 "traddr": "10.0.0.2", 00:27:10.684 "adrfam": "ipv4", 00:27:10.684 "trsvcid": "4420", 00:27:10.684 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:10.684 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:10.684 "hdgst": false, 00:27:10.684 "ddgst": false 00:27:10.684 }, 00:27:10.684 "method": "bdev_nvme_attach_controller" 00:27:10.684 },{ 00:27:10.684 "params": { 00:27:10.684 "name": "Nvme7", 00:27:10.684 "trtype": "tcp", 00:27:10.684 "traddr": "10.0.0.2", 00:27:10.684 "adrfam": "ipv4", 00:27:10.684 "trsvcid": "4420", 00:27:10.684 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:10.684 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:10.685 "hdgst": false, 00:27:10.685 "ddgst": false 00:27:10.685 }, 00:27:10.685 "method": "bdev_nvme_attach_controller" 00:27:10.685 },{ 00:27:10.685 "params": { 00:27:10.685 "name": "Nvme8", 00:27:10.685 "trtype": "tcp", 00:27:10.685 "traddr": "10.0.0.2", 00:27:10.685 "adrfam": "ipv4", 00:27:10.685 "trsvcid": "4420", 00:27:10.685 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:10.685 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:10.685 "hdgst": false, 
00:27:10.685 "ddgst": false 00:27:10.685 }, 00:27:10.685 "method": "bdev_nvme_attach_controller" 00:27:10.685 },{ 00:27:10.685 "params": { 00:27:10.685 "name": "Nvme9", 00:27:10.685 "trtype": "tcp", 00:27:10.685 "traddr": "10.0.0.2", 00:27:10.685 "adrfam": "ipv4", 00:27:10.685 "trsvcid": "4420", 00:27:10.685 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:10.685 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:10.685 "hdgst": false, 00:27:10.685 "ddgst": false 00:27:10.685 }, 00:27:10.685 "method": "bdev_nvme_attach_controller" 00:27:10.685 },{ 00:27:10.685 "params": { 00:27:10.685 "name": "Nvme10", 00:27:10.685 "trtype": "tcp", 00:27:10.685 "traddr": "10.0.0.2", 00:27:10.685 "adrfam": "ipv4", 00:27:10.685 "trsvcid": "4420", 00:27:10.685 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:10.685 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:10.685 "hdgst": false, 00:27:10.685 "ddgst": false 00:27:10.685 }, 00:27:10.685 "method": "bdev_nvme_attach_controller" 00:27:10.685 }' 00:27:10.685 [2024-07-20 18:57:20.926107] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:10.685 [2024-07-20 18:57:20.926198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1470038 ] 00:27:10.685 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.685 [2024-07-20 18:57:20.992566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.943 [2024-07-20 18:57:21.083505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.853 Running I/O for 1 seconds... 00:27:13.784 00:27:13.784 Latency(us) 00:27:13.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:13.784 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.784 Verification LBA range: start 0x0 length 0x400 00:27:13.784 Nvme1n1 : 1.04 246.31 15.39 0.00 0.00 256457.77 26602.76 250104.79 00:27:13.784 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.784 Verification LBA range: start 0x0 length 0x400 00:27:13.784 Nvme2n1 : 1.04 308.41 19.28 0.00 0.00 200811.56 23107.51 198064.36 00:27:13.784 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.784 Verification LBA range: start 0x0 length 0x400 00:27:13.784 Nvme3n1 : 1.18 217.81 13.61 0.00 0.00 281859.60 23495.87 292047.83 00:27:13.784 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.784 Verification LBA range: start 0x0 length 0x400 00:27:13.784 Nvme4n1 : 1.13 117.61 7.35 0.00 0.00 497004.51 18058.81 400789.05 00:27:13.784 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.784 Verification LBA range: start 0x0 length 0x400 00:27:13.784 Nvme5n1 : 1.16 165.84 10.37 0.00 0.00 356536.83 24660.95 372827.02 00:27:13.784 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.784 Verification LBA range: start 0x0 length 0x400 00:27:13.784 Nvme6n1 : 1.11 230.40 14.40 0.00 0.00 251522.84 23592.96 251658.24 00:27:13.784 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.784 Verification LBA range: start 0x0 length 0x400 00:27:13.784 Nvme7n1 : 1.23 207.72 12.98 0.00 0.00 268496.59 24855.13 296708.17 00:27:13.784 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.784 Verification LBA range: start 0x0 length 0x400 
00:27:13.784 Nvme8n1 : 1.15 166.98 10.44 0.00 0.00 337293.46 36311.80 416323.51 00:27:13.784 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.784 Verification LBA range: start 0x0 length 0x400 00:27:13.784 Nvme9n1 : 1.19 214.39 13.40 0.00 0.00 259601.07 25437.68 299815.06 00:27:13.784 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.784 Verification LBA range: start 0x0 length 0x400 00:27:13.784 Nvme10n1 : 1.18 325.19 20.32 0.00 0.00 167852.31 19903.53 195734.19 00:27:13.784 =================================================================================================================== 00:27:13.784 Total : 2200.67 137.54 0.00 0.00 266063.21 18058.81 416323.51 00:27:14.042 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:14.042 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:14.042 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:14.042 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:14.042 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:14.042 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:14.042 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:14.042 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:14.042 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:14.042 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:14.042 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:14.042 rmmod nvme_tcp 00:27:14.299 rmmod nvme_fabrics 00:27:14.299 rmmod nvme_keyring 00:27:14.299 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:14.299 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:14.299 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:14.299 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1469442 ']' 00:27:14.299 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1469442 00:27:14.299 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 1469442 ']' 00:27:14.299 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 1469442 00:27:14.299 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:27:14.299 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:14.299 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1469442 00:27:14.299 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:14.299 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:14.299 
18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1469442' 00:27:14.299 killing process with pid 1469442 00:27:14.299 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 1469442 00:27:14.300 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 1469442 00:27:14.864 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:14.864 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:14.864 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:14.864 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:14.864 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:14.864 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.864 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:14.864 18:57:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.764 18:57:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:16.764 00:27:16.764 real 0m12.090s 00:27:16.764 user 0m35.429s 00:27:16.764 sys 0m3.256s 00:27:16.764 18:57:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:16.764 18:57:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:16.764 ************************************ 00:27:16.764 END TEST nvmf_shutdown_tc1 00:27:16.764 ************************************ 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:16.764 ************************************ 00:27:16.764 START TEST nvmf_shutdown_tc2 00:27:16.764 ************************************ 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:16.764 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:16.765 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:16.765 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:16.765 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:16.765 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:16.765 18:57:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:16.765 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:17.023 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:17.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:17.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:27:17.024 00:27:17.024 --- 10.0.0.2 ping statistics --- 00:27:17.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.024 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:17.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:17.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:27:17.024 00:27:17.024 --- 10.0.0.1 ping statistics --- 00:27:17.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.024 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1470812 
00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1470812 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1470812 ']' 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:17.024 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.024 [2024-07-20 18:57:27.270544] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:17.024 [2024-07-20 18:57:27.270626] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:17.024 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.024 [2024-07-20 18:57:27.337156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:17.283 [2024-07-20 18:57:27.428758] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:17.283 [2024-07-20 18:57:27.428851] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:17.283 [2024-07-20 18:57:27.428867] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:17.283 [2024-07-20 18:57:27.428878] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:17.283 [2024-07-20 18:57:27.428888] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:17.283 [2024-07-20 18:57:27.428987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:17.283 [2024-07-20 18:57:27.429059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:17.283 [2024-07-20 18:57:27.429129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:17.283 [2024-07-20 18:57:27.429131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.283 [2024-07-20 18:57:27.565366] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:17.283 18:57:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.283 18:57:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.541 Malloc1 00:27:17.541 [2024-07-20 18:57:27.640695] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:17.541 Malloc2 00:27:17.541 Malloc3 00:27:17.541 Malloc4 00:27:17.541 Malloc5 00:27:17.541 Malloc6 00:27:17.800 Malloc7 00:27:17.800 Malloc8 00:27:17.800 Malloc9 00:27:17.800 Malloc10 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1470988 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1470988 /var/tmp/bdevperf.sock 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1470988 ']' 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:27:17.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:17.800 { 00:27:17.800 "params": { 00:27:17.800 "name": "Nvme$subsystem", 00:27:17.800 "trtype": "$TEST_TRANSPORT", 00:27:17.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.800 "adrfam": "ipv4", 00:27:17.800 "trsvcid": "$NVMF_PORT", 00:27:17.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.800 "hdgst": ${hdgst:-false}, 00:27:17.800 "ddgst": ${ddgst:-false} 00:27:17.800 }, 00:27:17.800 "method": "bdev_nvme_attach_controller" 00:27:17.800 } 00:27:17.800 EOF 00:27:17.800 )") 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:17.800 { 00:27:17.800 "params": { 00:27:17.800 "name": "Nvme$subsystem", 00:27:17.800 "trtype": "$TEST_TRANSPORT", 00:27:17.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.800 "adrfam": "ipv4", 00:27:17.800 "trsvcid": "$NVMF_PORT", 00:27:17.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.800 "hdgst": ${hdgst:-false}, 00:27:17.800 "ddgst": ${ddgst:-false} 00:27:17.800 }, 00:27:17.800 "method": "bdev_nvme_attach_controller" 00:27:17.800 } 00:27:17.800 EOF 00:27:17.800 )") 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:17.800 { 00:27:17.800 "params": { 00:27:17.800 "name": "Nvme$subsystem", 00:27:17.800 "trtype": "$TEST_TRANSPORT", 00:27:17.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.800 "adrfam": "ipv4", 00:27:17.800 "trsvcid": "$NVMF_PORT", 00:27:17.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.800 "hdgst": ${hdgst:-false}, 00:27:17.800 "ddgst": ${ddgst:-false} 00:27:17.800 }, 00:27:17.800 "method": "bdev_nvme_attach_controller" 00:27:17.800 } 00:27:17.800 EOF 00:27:17.800 )") 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:17.800 { 00:27:17.800 "params": { 00:27:17.800 "name": "Nvme$subsystem", 00:27:17.800 "trtype": "$TEST_TRANSPORT", 00:27:17.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.800 "adrfam": "ipv4", 00:27:17.800 "trsvcid": "$NVMF_PORT", 
00:27:17.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.800 "hdgst": ${hdgst:-false}, 00:27:17.800 "ddgst": ${ddgst:-false} 00:27:17.800 }, 00:27:17.800 "method": "bdev_nvme_attach_controller" 00:27:17.800 } 00:27:17.800 EOF 00:27:17.800 )") 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:17.800 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:17.800 { 00:27:17.800 "params": { 00:27:17.800 "name": "Nvme$subsystem", 00:27:17.800 "trtype": "$TEST_TRANSPORT", 00:27:17.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.801 "adrfam": "ipv4", 00:27:17.801 "trsvcid": "$NVMF_PORT", 00:27:17.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.801 "hdgst": ${hdgst:-false}, 00:27:17.801 "ddgst": ${ddgst:-false} 00:27:17.801 }, 00:27:17.801 "method": "bdev_nvme_attach_controller" 00:27:17.801 } 00:27:17.801 EOF 00:27:17.801 )") 00:27:17.801 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:17.801 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:17.801 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:17.801 { 00:27:17.801 "params": { 00:27:17.801 "name": "Nvme$subsystem", 00:27:17.801 "trtype": "$TEST_TRANSPORT", 00:27:17.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.801 "adrfam": "ipv4", 00:27:17.801 "trsvcid": "$NVMF_PORT", 00:27:17.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.801 "hdgst": ${hdgst:-false}, 00:27:17.801 "ddgst": ${ddgst:-false} 00:27:17.801 }, 00:27:17.801 "method": "bdev_nvme_attach_controller" 00:27:17.801 } 00:27:17.801 EOF 00:27:17.801 )") 00:27:17.801 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:18.060 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:18.060 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:18.060 { 00:27:18.060 "params": { 00:27:18.060 "name": "Nvme$subsystem", 00:27:18.060 "trtype": "$TEST_TRANSPORT", 00:27:18.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.060 "adrfam": "ipv4", 00:27:18.060 "trsvcid": "$NVMF_PORT", 00:27:18.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.060 "hdgst": ${hdgst:-false}, 00:27:18.060 "ddgst": ${ddgst:-false} 00:27:18.060 }, 00:27:18.060 "method": "bdev_nvme_attach_controller" 00:27:18.060 } 00:27:18.060 EOF 00:27:18.060 )") 00:27:18.060 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:18.060 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:18.060 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:18.060 { 00:27:18.060 "params": { 00:27:18.060 "name": "Nvme$subsystem", 00:27:18.060 "trtype": "$TEST_TRANSPORT", 00:27:18.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.060 "adrfam": "ipv4", 00:27:18.060 "trsvcid": "$NVMF_PORT", 00:27:18.060 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.060 "hdgst": ${hdgst:-false}, 00:27:18.060 "ddgst": ${ddgst:-false} 00:27:18.060 }, 00:27:18.060 "method": "bdev_nvme_attach_controller" 00:27:18.060 } 00:27:18.060 EOF 00:27:18.060 )") 00:27:18.060 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:18.060 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:18.060 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:18.060 { 00:27:18.060 "params": { 00:27:18.060 "name": "Nvme$subsystem", 00:27:18.060 "trtype": "$TEST_TRANSPORT", 00:27:18.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.060 "adrfam": "ipv4", 00:27:18.060 "trsvcid": "$NVMF_PORT", 00:27:18.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.060 "hdgst": ${hdgst:-false}, 00:27:18.060 "ddgst": ${ddgst:-false} 00:27:18.060 }, 00:27:18.060 "method": "bdev_nvme_attach_controller" 00:27:18.060 } 00:27:18.060 EOF 00:27:18.060 )") 00:27:18.060 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:18.060 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:18.060 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:18.060 { 00:27:18.060 "params": { 00:27:18.060 "name": "Nvme$subsystem", 00:27:18.060 "trtype": "$TEST_TRANSPORT", 00:27:18.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.060 "adrfam": "ipv4", 00:27:18.060 "trsvcid": "$NVMF_PORT", 00:27:18.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.060 "hdgst": ${hdgst:-false}, 00:27:18.060 "ddgst": ${ddgst:-false} 00:27:18.060 }, 00:27:18.060 "method": "bdev_nvme_attach_controller" 00:27:18.060 } 00:27:18.060 EOF 00:27:18.060 )") 00:27:18.060 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:18.060 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:27:18.060 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:18.060 18:57:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:18.060 "params": { 00:27:18.060 "name": "Nvme1", 00:27:18.060 "trtype": "tcp", 00:27:18.060 "traddr": "10.0.0.2", 00:27:18.060 "adrfam": "ipv4", 00:27:18.060 "trsvcid": "4420", 00:27:18.060 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:18.060 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:18.060 "hdgst": false, 00:27:18.060 "ddgst": false 00:27:18.060 }, 00:27:18.060 "method": "bdev_nvme_attach_controller" 00:27:18.060 },{ 00:27:18.060 "params": { 00:27:18.060 "name": "Nvme2", 00:27:18.060 "trtype": "tcp", 00:27:18.060 "traddr": "10.0.0.2", 00:27:18.060 "adrfam": "ipv4", 00:27:18.060 "trsvcid": "4420", 00:27:18.060 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:18.060 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:18.060 "hdgst": false, 00:27:18.060 "ddgst": false 00:27:18.060 }, 00:27:18.060 "method": "bdev_nvme_attach_controller" 00:27:18.060 },{ 00:27:18.060 "params": { 00:27:18.060 "name": "Nvme3", 00:27:18.060 "trtype": "tcp", 00:27:18.060 "traddr": "10.0.0.2", 00:27:18.060 "adrfam": "ipv4", 00:27:18.060 "trsvcid": "4420", 00:27:18.060 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:18.060 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:18.060 "hdgst": false, 00:27:18.060 "ddgst": false 00:27:18.060 }, 00:27:18.060 "method": "bdev_nvme_attach_controller" 00:27:18.060 },{ 00:27:18.060 "params": { 00:27:18.060 "name": "Nvme4", 00:27:18.060 "trtype": "tcp", 00:27:18.060 "traddr": "10.0.0.2", 00:27:18.060 "adrfam": "ipv4", 00:27:18.060 "trsvcid": "4420", 00:27:18.060 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:18.060 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:18.060 "hdgst": false, 00:27:18.060 "ddgst": false 00:27:18.060 }, 00:27:18.060 "method": "bdev_nvme_attach_controller" 00:27:18.060 },{ 00:27:18.060 "params": { 00:27:18.060 "name": "Nvme5", 00:27:18.060 "trtype": "tcp", 00:27:18.060 "traddr": "10.0.0.2", 00:27:18.060 "adrfam": "ipv4", 00:27:18.060 "trsvcid": "4420", 00:27:18.060 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:18.060 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:18.060 "hdgst": false, 00:27:18.060 "ddgst": false 00:27:18.060 }, 00:27:18.060 "method": "bdev_nvme_attach_controller" 00:27:18.060 },{ 00:27:18.060 "params": { 00:27:18.060 "name": "Nvme6", 00:27:18.060 "trtype": "tcp", 00:27:18.060 "traddr": "10.0.0.2", 00:27:18.060 "adrfam": "ipv4", 00:27:18.060 "trsvcid": "4420", 00:27:18.060 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:18.060 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:18.060 "hdgst": false, 00:27:18.060 "ddgst": false 00:27:18.060 }, 00:27:18.060 "method": "bdev_nvme_attach_controller" 00:27:18.060 },{ 00:27:18.060 "params": { 00:27:18.060 "name": "Nvme7", 00:27:18.060 "trtype": "tcp", 00:27:18.060 "traddr": "10.0.0.2", 00:27:18.060 "adrfam": "ipv4", 00:27:18.060 "trsvcid": "4420", 00:27:18.060 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:18.060 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:18.060 "hdgst": false, 00:27:18.060 "ddgst": false 00:27:18.060 }, 00:27:18.060 "method": "bdev_nvme_attach_controller" 00:27:18.060 },{ 00:27:18.060 "params": { 00:27:18.060 "name": "Nvme8", 00:27:18.060 "trtype": "tcp", 00:27:18.060 "traddr": "10.0.0.2", 00:27:18.060 "adrfam": "ipv4", 00:27:18.060 "trsvcid": "4420", 00:27:18.060 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:18.060 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:18.060 "hdgst": false, 
00:27:18.060 "ddgst": false 00:27:18.060 }, 00:27:18.060 "method": "bdev_nvme_attach_controller" 00:27:18.060 },{ 00:27:18.060 "params": { 00:27:18.060 "name": "Nvme9", 00:27:18.060 "trtype": "tcp", 00:27:18.060 "traddr": "10.0.0.2", 00:27:18.060 "adrfam": "ipv4", 00:27:18.060 "trsvcid": "4420", 00:27:18.060 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:18.060 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:18.060 "hdgst": false, 00:27:18.060 "ddgst": false 00:27:18.060 }, 00:27:18.060 "method": "bdev_nvme_attach_controller" 00:27:18.060 },{ 00:27:18.060 "params": { 00:27:18.060 "name": "Nvme10", 00:27:18.060 "trtype": "tcp", 00:27:18.060 "traddr": "10.0.0.2", 00:27:18.060 "adrfam": "ipv4", 00:27:18.060 "trsvcid": "4420", 00:27:18.060 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:18.060 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:18.060 "hdgst": false, 00:27:18.060 "ddgst": false 00:27:18.060 }, 00:27:18.060 "method": "bdev_nvme_attach_controller" 00:27:18.060 }' 00:27:18.060 [2024-07-20 18:57:28.147345] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:18.060 [2024-07-20 18:57:28.147434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1470988 ] 00:27:18.060 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.060 [2024-07-20 18:57:28.211195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.060 [2024-07-20 18:57:28.297548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.464 Running I/O for 10 seconds... 00:27:20.046 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:20.046 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:20.046 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:20.046 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.046 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.046 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.046 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:20.046 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:20.046 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:20.046 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:20.046 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:20.046 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:20.046 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:20.046 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:20.046 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:20.046 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.046 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.046 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.046 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:20.046 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:20.046 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:20.303 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:20.303 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:20.303 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:20.303 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:20.303 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.303 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.303 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.303 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=130 00:27:20.303 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 130 -ge 100 ']' 00:27:20.303 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:20.303 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:20.303 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:20.303 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1470988 00:27:20.303 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 1470988 ']' 00:27:20.303 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 1470988 00:27:20.303 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:27:20.303 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:20.303 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1470988 00:27:20.303 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:20.303 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:20.303 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1470988' 00:27:20.303 killing process with pid 1470988 00:27:20.303 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 1470988 00:27:20.303 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 1470988 00:27:20.560 Received shutdown signal, test time was about 0.950618 seconds 00:27:20.560 00:27:20.560 Latency(us) 00:27:20.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:20.561 Job: Nvme1n1 (Core 
Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.561 Verification LBA range: start 0x0 length 0x400 00:27:20.561 Nvme1n1 : 0.90 214.32 13.40 0.00 0.00 294977.74 25049.32 285834.05 00:27:20.561 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.561 Verification LBA range: start 0x0 length 0x400 00:27:20.561 Nvme2n1 : 0.85 233.30 14.58 0.00 0.00 262078.29 9514.86 257872.02 00:27:20.561 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.561 Verification LBA range: start 0x0 length 0x400 00:27:20.561 Nvme3n1 : 0.83 230.95 14.43 0.00 0.00 260975.88 26796.94 262532.36 00:27:20.561 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.561 Verification LBA range: start 0x0 length 0x400 00:27:20.561 Nvme4n1 : 0.89 215.43 13.46 0.00 0.00 275253.22 24563.86 250104.79 00:27:20.561 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.561 Verification LBA range: start 0x0 length 0x400 00:27:20.561 Nvme5n1 : 0.95 202.15 12.63 0.00 0.00 276108.33 23787.14 285834.05 00:27:20.561 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.561 Verification LBA range: start 0x0 length 0x400 00:27:20.561 Nvme6n1 : 0.88 292.14 18.26 0.00 0.00 193620.01 24175.50 237677.23 00:27:20.561 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.561 Verification LBA range: start 0x0 length 0x400 00:27:20.561 Nvme7n1 : 0.85 225.23 14.08 0.00 0.00 244393.66 42331.40 234570.33 00:27:20.561 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.561 Verification LBA range: start 0x0 length 0x400 00:27:20.561 Nvme8n1 : 0.87 146.57 9.16 0.00 0.00 364137.43 44079.03 298261.62 00:27:20.561 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.561 Verification LBA range: start 0x0 length 0x400 00:27:20.561 Nvme9n1 : 0.90 213.32 13.33 0.00 0.00 248828.97 22816.24 265639.25 00:27:20.561 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.561 Verification LBA range: start 0x0 length 0x400 00:27:20.561 Nvme10n1 : 0.86 223.59 13.97 0.00 0.00 229265.51 23592.96 229910.00 00:27:20.561 =================================================================================================================== 00:27:20.561 Total : 2197.02 137.31 0.00 0.00 259288.71 9514.86 298261.62 00:27:20.818 18:57:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:27:21.747 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1470812 00:27:21.747 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 
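The two killprocess calls in this trace (pid 1470988 for bdevperf above, pid 1470812 for the target below) exercise the same helper; a rough sketch of its shape, reconstructed from the xtrace and therefore approximate, is:

killprocess_sketch() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0          # nothing left to kill
    if [ "$(uname)" = Linux ]; then
        # reactor_0 / reactor_1 in the trace comes from this comm= lookup
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [ "$process_name" = sudo ]; then
        # never hit in this run; shown only to mirror the '= sudo' test above
        pid=$(ps --ppid "$pid" -o pid= | tr -d ' ')
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}

After the bdevperf process is reaped, stoptarget removes the generated bdevperf.conf and rpcs.txt, and nvmftestfini unloads nvme-tcp, nvme-fabrics and nvme-keyring before killing the target, which is the rm/rmmod/killprocess sequence visible in the surrounding lines.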
00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:21.748 rmmod nvme_tcp 00:27:21.748 rmmod nvme_fabrics 00:27:21.748 rmmod nvme_keyring 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1470812 ']' 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1470812 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 1470812 ']' 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 1470812 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1470812 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1470812' 00:27:21.748 killing process with pid 1470812 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 1470812 00:27:21.748 18:57:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 1470812 00:27:22.314 18:57:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:22.314 18:57:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:22.314 18:57:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:22.314 18:57:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:22.314 18:57:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:22.314 18:57:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.314 18:57:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:22.314 18:57:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:24.839 00:27:24.839 real 0m7.518s 00:27:24.839 user 0m21.759s 00:27:24.839 sys 0m1.534s 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.839 ************************************ 00:27:24.839 END TEST nvmf_shutdown_tc2 00:27:24.839 ************************************ 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:24.839 ************************************ 00:27:24.839 START TEST nvmf_shutdown_tc3 00:27:24.839 ************************************ 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 
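The long run of array setup that follows (e810, x722, mlx, pci_devs, net_devs) is the NIC-discovery preamble of nvmftestinit. Condensed into a sketch (the operstate check is inferred from the trace's "[[ up == up ]]" tests, so treat the details as approximate):

# Intel E810 (0x1592/0x159b) and X722 plus Mellanox IDs are gathered into
# pci_devs; for TCP only the kernel netdev behind each port is needed.
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    for net_dev in "${pci_net_devs[@]}"; do
        if [ "$(cat "$net_dev/operstate" 2>/dev/null)" = up ]; then
            net_devs+=("${net_dev##*/}")
            echo "Found net devices under $pci: ${net_dev##*/}"
        fi
    done
done
# On this host the result is cvl_0_0 and cvl_0_1 under 0000:0a:00.0/1.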
00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:24.839 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.839 18:57:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:24.839 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:24.839 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:24.839 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:24.839 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:24.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:24.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:27:24.840 00:27:24.840 --- 10.0.0.2 ping statistics --- 00:27:24.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.840 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:24.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:24.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:27:24.840 00:27:24.840 --- 10.0.0.1 ping statistics --- 00:27:24.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.840 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1471898 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1471898 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 1471898 ']' 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
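Distilled from the nvmf_tcp_init and nvmfappstart entries above and below, the target-side plumbing for this run amounts to the following (interface names and addresses are simply the ones this CI host uses):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

# nvmfappstart then launches the target inside that namespace with the
# 0x1E core mask seen in the trace:
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# waitforlisten blocks until /var/tmp/spdk.sock answers, which is the
# 'Waiting for process to start up...' message just above.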
00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:24.840 18:57:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:24.840 [2024-07-20 18:57:34.837962] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:24.840 [2024-07-20 18:57:34.838039] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:24.840 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.840 [2024-07-20 18:57:34.902867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:24.840 [2024-07-20 18:57:34.989061] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:24.840 [2024-07-20 18:57:34.989136] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:24.840 [2024-07-20 18:57:34.989159] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:24.840 [2024-07-20 18:57:34.989170] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:24.840 [2024-07-20 18:57:34.989185] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:24.840 [2024-07-20 18:57:34.989273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:24.840 [2024-07-20 18:57:34.989336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:24.840 [2024-07-20 18:57:34.989405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.840 [2024-07-20 18:57:34.989403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:24.840 [2024-07-20 18:57:35.124314] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:24.840 
18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.840 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:25.097 Malloc1 00:27:25.097 [2024-07-20 18:57:35.199457] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:25.097 Malloc2 00:27:25.097 Malloc3 00:27:25.097 Malloc4 00:27:25.097 Malloc5 00:27:25.097 Malloc6 00:27:25.355 Malloc7 00:27:25.355 Malloc8 00:27:25.355 Malloc9 00:27:25.355 Malloc10 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:25.355 18:57:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1472078 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1472078 /var/tmp/bdevperf.sock 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 1472078 ']' 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:25.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:25.355 { 00:27:25.355 "params": { 00:27:25.355 "name": "Nvme$subsystem", 00:27:25.355 "trtype": "$TEST_TRANSPORT", 00:27:25.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.355 "adrfam": "ipv4", 00:27:25.355 "trsvcid": "$NVMF_PORT", 00:27:25.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.355 "hdgst": ${hdgst:-false}, 00:27:25.355 "ddgst": ${ddgst:-false} 00:27:25.355 }, 00:27:25.355 "method": "bdev_nvme_attach_controller" 00:27:25.355 } 00:27:25.355 EOF 00:27:25.355 )") 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:25.355 { 00:27:25.355 "params": { 00:27:25.355 "name": "Nvme$subsystem", 00:27:25.355 "trtype": "$TEST_TRANSPORT", 00:27:25.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.355 "adrfam": "ipv4", 00:27:25.355 "trsvcid": "$NVMF_PORT", 00:27:25.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.355 "hdgst": ${hdgst:-false}, 00:27:25.355 "ddgst": ${ddgst:-false} 00:27:25.355 }, 00:27:25.355 "method": "bdev_nvme_attach_controller" 00:27:25.355 } 00:27:25.355 EOF 00:27:25.355 )") 00:27:25.355 18:57:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:25.355 { 00:27:25.355 "params": { 00:27:25.355 "name": "Nvme$subsystem", 00:27:25.355 "trtype": "$TEST_TRANSPORT", 00:27:25.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.355 "adrfam": "ipv4", 00:27:25.355 "trsvcid": "$NVMF_PORT", 00:27:25.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.355 "hdgst": ${hdgst:-false}, 00:27:25.355 "ddgst": ${ddgst:-false} 00:27:25.355 }, 00:27:25.355 "method": "bdev_nvme_attach_controller" 00:27:25.355 } 00:27:25.355 EOF 00:27:25.355 )") 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:25.355 { 00:27:25.355 "params": { 00:27:25.355 "name": "Nvme$subsystem", 00:27:25.355 "trtype": "$TEST_TRANSPORT", 00:27:25.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.355 "adrfam": "ipv4", 00:27:25.355 "trsvcid": "$NVMF_PORT", 00:27:25.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.355 "hdgst": ${hdgst:-false}, 00:27:25.355 "ddgst": ${ddgst:-false} 00:27:25.355 }, 00:27:25.355 "method": "bdev_nvme_attach_controller" 00:27:25.355 } 00:27:25.355 EOF 00:27:25.355 )") 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:25.355 { 00:27:25.355 "params": { 00:27:25.355 "name": "Nvme$subsystem", 00:27:25.355 "trtype": "$TEST_TRANSPORT", 00:27:25.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.355 "adrfam": "ipv4", 00:27:25.355 "trsvcid": "$NVMF_PORT", 00:27:25.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.355 "hdgst": ${hdgst:-false}, 00:27:25.355 "ddgst": ${ddgst:-false} 00:27:25.355 }, 00:27:25.355 "method": "bdev_nvme_attach_controller" 00:27:25.355 } 00:27:25.355 EOF 00:27:25.355 )") 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:25.355 { 00:27:25.355 "params": { 00:27:25.355 "name": "Nvme$subsystem", 00:27:25.355 "trtype": "$TEST_TRANSPORT", 00:27:25.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.355 "adrfam": "ipv4", 00:27:25.355 "trsvcid": "$NVMF_PORT", 00:27:25.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.355 "hdgst": ${hdgst:-false}, 00:27:25.355 "ddgst": ${ddgst:-false} 00:27:25.355 }, 00:27:25.355 "method": "bdev_nvme_attach_controller" 00:27:25.355 } 00:27:25.355 EOF 00:27:25.355 )") 00:27:25.355 18:57:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:25.355 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:25.355 { 00:27:25.355 "params": { 00:27:25.355 "name": "Nvme$subsystem", 00:27:25.355 "trtype": "$TEST_TRANSPORT", 00:27:25.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.355 "adrfam": "ipv4", 00:27:25.355 "trsvcid": "$NVMF_PORT", 00:27:25.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.355 "hdgst": ${hdgst:-false}, 00:27:25.355 "ddgst": ${ddgst:-false} 00:27:25.355 }, 00:27:25.355 "method": "bdev_nvme_attach_controller" 00:27:25.355 } 00:27:25.355 EOF 00:27:25.355 )") 00:27:25.613 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:25.613 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:25.613 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:25.613 { 00:27:25.613 "params": { 00:27:25.613 "name": "Nvme$subsystem", 00:27:25.613 "trtype": "$TEST_TRANSPORT", 00:27:25.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.613 "adrfam": "ipv4", 00:27:25.613 "trsvcid": "$NVMF_PORT", 00:27:25.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.613 "hdgst": ${hdgst:-false}, 00:27:25.613 "ddgst": ${ddgst:-false} 00:27:25.613 }, 00:27:25.613 "method": "bdev_nvme_attach_controller" 00:27:25.613 } 00:27:25.613 EOF 00:27:25.613 )") 00:27:25.613 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:25.613 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:25.613 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:25.613 { 00:27:25.613 "params": { 00:27:25.613 "name": "Nvme$subsystem", 00:27:25.613 "trtype": "$TEST_TRANSPORT", 00:27:25.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.613 "adrfam": "ipv4", 00:27:25.613 "trsvcid": "$NVMF_PORT", 00:27:25.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.613 "hdgst": ${hdgst:-false}, 00:27:25.613 "ddgst": ${ddgst:-false} 00:27:25.613 }, 00:27:25.613 "method": "bdev_nvme_attach_controller" 00:27:25.613 } 00:27:25.613 EOF 00:27:25.613 )") 00:27:25.613 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:25.613 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:25.613 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:25.613 { 00:27:25.613 "params": { 00:27:25.613 "name": "Nvme$subsystem", 00:27:25.613 "trtype": "$TEST_TRANSPORT", 00:27:25.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.613 "adrfam": "ipv4", 00:27:25.613 "trsvcid": "$NVMF_PORT", 00:27:25.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.613 "hdgst": ${hdgst:-false}, 00:27:25.613 "ddgst": ${ddgst:-false} 00:27:25.613 }, 00:27:25.613 "method": "bdev_nvme_attach_controller" 00:27:25.613 } 00:27:25.613 EOF 00:27:25.613 )") 00:27:25.613 18:57:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:25.613 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:27:25.613 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:25.613 18:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:25.613 "params": { 00:27:25.613 "name": "Nvme1", 00:27:25.613 "trtype": "tcp", 00:27:25.613 "traddr": "10.0.0.2", 00:27:25.613 "adrfam": "ipv4", 00:27:25.613 "trsvcid": "4420", 00:27:25.613 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:25.613 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:25.613 "hdgst": false, 00:27:25.613 "ddgst": false 00:27:25.613 }, 00:27:25.613 "method": "bdev_nvme_attach_controller" 00:27:25.613 },{ 00:27:25.614 "params": { 00:27:25.614 "name": "Nvme2", 00:27:25.614 "trtype": "tcp", 00:27:25.614 "traddr": "10.0.0.2", 00:27:25.614 "adrfam": "ipv4", 00:27:25.614 "trsvcid": "4420", 00:27:25.614 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:25.614 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:25.614 "hdgst": false, 00:27:25.614 "ddgst": false 00:27:25.614 }, 00:27:25.614 "method": "bdev_nvme_attach_controller" 00:27:25.614 },{ 00:27:25.614 "params": { 00:27:25.614 "name": "Nvme3", 00:27:25.614 "trtype": "tcp", 00:27:25.614 "traddr": "10.0.0.2", 00:27:25.614 "adrfam": "ipv4", 00:27:25.614 "trsvcid": "4420", 00:27:25.614 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:25.614 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:25.614 "hdgst": false, 00:27:25.614 "ddgst": false 00:27:25.614 }, 00:27:25.614 "method": "bdev_nvme_attach_controller" 00:27:25.614 },{ 00:27:25.614 "params": { 00:27:25.614 "name": "Nvme4", 00:27:25.614 "trtype": "tcp", 00:27:25.614 "traddr": "10.0.0.2", 00:27:25.614 "adrfam": "ipv4", 00:27:25.614 "trsvcid": "4420", 00:27:25.614 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:25.614 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:25.614 "hdgst": false, 00:27:25.614 "ddgst": false 00:27:25.614 }, 00:27:25.614 "method": "bdev_nvme_attach_controller" 00:27:25.614 },{ 00:27:25.614 "params": { 00:27:25.614 "name": "Nvme5", 00:27:25.614 "trtype": "tcp", 00:27:25.614 "traddr": "10.0.0.2", 00:27:25.614 "adrfam": "ipv4", 00:27:25.614 "trsvcid": "4420", 00:27:25.614 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:25.614 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:25.614 "hdgst": false, 00:27:25.614 "ddgst": false 00:27:25.614 }, 00:27:25.614 "method": "bdev_nvme_attach_controller" 00:27:25.614 },{ 00:27:25.614 "params": { 00:27:25.614 "name": "Nvme6", 00:27:25.614 "trtype": "tcp", 00:27:25.614 "traddr": "10.0.0.2", 00:27:25.614 "adrfam": "ipv4", 00:27:25.614 "trsvcid": "4420", 00:27:25.614 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:25.614 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:25.614 "hdgst": false, 00:27:25.614 "ddgst": false 00:27:25.614 }, 00:27:25.614 "method": "bdev_nvme_attach_controller" 00:27:25.614 },{ 00:27:25.614 "params": { 00:27:25.614 "name": "Nvme7", 00:27:25.614 "trtype": "tcp", 00:27:25.614 "traddr": "10.0.0.2", 00:27:25.614 "adrfam": "ipv4", 00:27:25.614 "trsvcid": "4420", 00:27:25.614 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:25.614 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:25.614 "hdgst": false, 00:27:25.614 "ddgst": false 00:27:25.614 }, 00:27:25.614 "method": "bdev_nvme_attach_controller" 00:27:25.614 },{ 00:27:25.614 "params": { 00:27:25.614 "name": "Nvme8", 00:27:25.614 "trtype": "tcp", 00:27:25.614 "traddr": "10.0.0.2", 00:27:25.614 "adrfam": "ipv4", 
00:27:25.614 "trsvcid": "4420", 00:27:25.614 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:25.614 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:25.614 "hdgst": false, 00:27:25.614 "ddgst": false 00:27:25.614 }, 00:27:25.614 "method": "bdev_nvme_attach_controller" 00:27:25.614 },{ 00:27:25.614 "params": { 00:27:25.614 "name": "Nvme9", 00:27:25.614 "trtype": "tcp", 00:27:25.614 "traddr": "10.0.0.2", 00:27:25.614 "adrfam": "ipv4", 00:27:25.614 "trsvcid": "4420", 00:27:25.614 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:25.614 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:25.614 "hdgst": false, 00:27:25.614 "ddgst": false 00:27:25.614 }, 00:27:25.614 "method": "bdev_nvme_attach_controller" 00:27:25.614 },{ 00:27:25.614 "params": { 00:27:25.614 "name": "Nvme10", 00:27:25.614 "trtype": "tcp", 00:27:25.614 "traddr": "10.0.0.2", 00:27:25.614 "adrfam": "ipv4", 00:27:25.614 "trsvcid": "4420", 00:27:25.614 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:25.614 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:25.614 "hdgst": false, 00:27:25.614 "ddgst": false 00:27:25.614 }, 00:27:25.614 "method": "bdev_nvme_attach_controller" 00:27:25.614 }' 00:27:25.614 [2024-07-20 18:57:35.700441] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:25.614 [2024-07-20 18:57:35.700518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472078 ] 00:27:25.614 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.614 [2024-07-20 18:57:35.763915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.614 [2024-07-20 18:57:35.849551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.512 Running I/O for 10 seconds... 
00:27:27.512 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:27.512 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:27.512 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:27.512 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.512 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:27.512 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.512 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:27.512 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:27.512 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:27.512 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:27.512 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:27.512 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:27.512 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:27.512 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:27.512 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:27.512 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:27.512 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.512 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:27.512 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.512 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:27.512 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:27.512 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:27.770 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:27.770 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:27.770 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:27.770 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:27.770 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.770 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:27.770 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.770 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:27:27.770 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:27.770 18:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:28.034 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:28.034 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:28.034 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:28.034 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:28.034 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.034 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:28.034 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.034 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:28.034 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:28.034 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:28.034 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:28.034 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:28.034 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1471898 00:27:28.034 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 1471898 ']' 00:27:28.034 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 1471898 00:27:28.034 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:27:28.034 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:28.034 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1471898 00:27:28.034 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:28.034 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:28.034 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1471898' 00:27:28.034 killing process with pid 1471898 00:27:28.034 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 1471898 00:27:28.034 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 1471898 00:27:28.034 [2024-07-20 18:57:38.319562] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247b560 is same with the state(5) to be set 00:27:28.034 [2024-07-20 18:57:38.319651] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247b560 is same with the state(5) to be set 00:27:28.034 [2024-07-20 18:57:38.319666] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247b560 is same with the state(5) to be set 00:27:28.034 [2024-07-20 18:57:38.319678] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x247b560 is same with the state(5) to be set
[... tcp.c:1598:nvmf_tcp_qpair_set_recv_state keeps printing the same "*ERROR*: The recv state of tqpair=0x... is same with the state(5) to be set" message in bursts up to 18:57:38.330577: about a dozen records for tqpair=0x247b560, then several dozen each for tqpair=0x22c2700, tqpair=0x247ba00, tqpair=0x247c360 and tqpair=0x247c800 ...]
[... interleaved with the tqpair=0x247c800 burst, the bdevperf initiator reports its outstanding admin commands (nvme_qpair.c: 223:nvme_admin_qpair_print_command: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3) completing as ABORTED - SQ DELETION (00/08) (nvme_qpair.c: 474:spdk_nvme_print_completion), while nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state records the matching recv-state change for tqpair=0xfe0300, 0xbcccd0, 0xfde190, 0x100f150 and 0x1025e70 ...]
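Before the shutdown is triggered, the trace above (target/shutdown.sh@50-69) runs the waitforio helper: it polls the bdevperf RPC socket until Nvme1n1 has completed at least 100 reads (3, then 67, then 131 here), sleeping 0.25 s between attempts and giving up after 10 tries. A minimal reconstruction from the traced line numbers, assuming rpc_cmd is the usual autotest wrapper around scripts/rpc.py (its body and the argument handling are not visible in this trace):

waitforio() {
    local sock=$1 bdev=$2
    [ -z "$sock" ] && return 1           # shutdown.sh@50: an RPC socket is required
    [ -z "$bdev" ] && return 1           # shutdown.sh@54: a bdev name is required
    local ret=1                          # shutdown.sh@57
    local i read_io_count
    for ((i = 10; i != 0; i--)); do      # shutdown.sh@59: at most 10 attempts
        # shutdown.sh@60: cumulative read-op counter for the bdev, via bdevperf's RPC socket
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
                        | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then   # shutdown.sh@63: enough I/O observed?
            ret=0                               # shutdown.sh@64
            break                               # shutdown.sh@65
        fi
        sleep 0.25                              # shutdown.sh@67
    done
    return $ret                                 # shutdown.sh@69
}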
00:27:28.037 [... the last admin commands (qid:0 cid:2-3) complete as ABORTED - SQ DELETION (00/08) and nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state records the recv-state change for tqpair=0x1025e70 ...]
00:27:28.037-00:27:28.038 [... from 18:57:38.331657 onwards nvme_qpair.c: 243:nvme_io_qpair_print_command lists the queued WRITEs on sqid:1 (cid:13 through cid:50, nsid:1, lba:18048 through lba:22784 in steps of 128, len:128 each) and nvme_qpair.c: 474:spdk_nvme_print_completion reports each of them as ABORTED - SQ DELETION (00/08) qid:1 ...]
18:57:38.332934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.038 [2024-07-20 18:57:38.332950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.038 [2024-07-20 18:57:38.332965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.038 [2024-07-20 18:57:38.332981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.038 [2024-07-20 18:57:38.332995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.038 [2024-07-20 18:57:38.333012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.333027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.333061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.333094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.333140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.333170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.333200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.333230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 
18:57:38.333263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.333292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.333322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.333352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.333381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.333410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.333439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.333484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.333516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.333546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.333576] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.333607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.333638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.333676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.333707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.333737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.333768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.333874] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x112d700 was disconnected and freed. reset controller. 
00:27:28.039 [2024-07-20 18:57:38.334481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.334507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.334529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.334546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.334562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.334577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.334594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.334610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.334626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.334641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.334658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.334673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.334690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.334705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.334721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.334736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.334756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.334772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.334788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.334811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 
18:57:38.334839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.334855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.334871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.334886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.334902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.334917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.334933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.334953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.334970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.334985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.335002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.335016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.335032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.335054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.335070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.039 [2024-07-20 18:57:38.335084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.039 [2024-07-20 18:57:38.335100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 
18:57:38.335163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 
18:57:38.335493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 
18:57:38.335815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.335969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.335985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.336008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.336025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.336039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.336056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.336070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.336086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.336100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.336131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.336145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 
18:57:38.336161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.336175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.336191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.336205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.336220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.336234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.336249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.336263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.336278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.336292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.336308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.336322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.336337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.336352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.336367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.336381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.336400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.336415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.336430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.336445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 
18:57:38.336460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.040 [2024-07-20 18:57:38.336474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.040 [2024-07-20 18:57:38.336490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.041 [2024-07-20 18:57:38.336505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.041 [2024-07-20 18:57:38.336521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.041 [2024-07-20 18:57:38.336535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.041 [2024-07-20 18:57:38.336550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.041 [2024-07-20 18:57:38.336564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.041 [2024-07-20 18:57:38.336638] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10ae630 was disconnected and freed. reset controller. 00:27:28.041 [2024-07-20 18:57:38.337554] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337588] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337605] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337618] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337644] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337657] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337671] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337685] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337698] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337712] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337725] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 
18:57:38.337739] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337787] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337826] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337841] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337854] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337867] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337880] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337893] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337906] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337919] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337932] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337945] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337958] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337971] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.337985] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338005] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338031] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338049] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338077] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338090] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same 
with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338103] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338116] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338129] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338142] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338155] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338168] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338181] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338202] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338216] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338228] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338241] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338254] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338259] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:28.041 [2024-07-20 18:57:38.338267] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338284] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338297] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338300] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcccd0 (9): Bad file descriptor 00:27:28.041 [2024-07-20 18:57:38.338309] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338323] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338336] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338348] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.338361] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state
of tqpair=0x247ccc0 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.339836] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.041 [2024-07-20 18:57:38.339870] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe0300 (9): Bad file descriptor 00:27:28.041 [2024-07-20 18:57:38.339942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.041 [2024-07-20 18:57:38.339965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.041 [2024-07-20 18:57:38.339982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.041 [2024-07-20 18:57:38.339997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.041 [2024-07-20 18:57:38.340011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.041 [2024-07-20 18:57:38.340025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.041 [2024-07-20 18:57:38.340046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.041 [2024-07-20 18:57:38.340059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.041 [2024-07-20 18:57:38.340073] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118e120 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.340115] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.340125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.041 [2024-07-20 18:57:38.340148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.041 [2024-07-20 18:57:38.340151] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.340164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.041 [2024-07-20 18:57:38.340166] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.340179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.041 [2024-07-20 18:57:38.340180] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.340194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000
cdw11:00000000 00:27:28.041 [2024-07-20 18:57:38.340194] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.340210] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.340211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.041 [2024-07-20 18:57:38.340227] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.340228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.041 [2024-07-20 18:57:38.340242] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.041 [2024-07-20 18:57:38.340243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.042 [2024-07-20 18:57:38.340257] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340258] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b310 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340272] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340285] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfde190 (9): Bad file descriptor 00:27:28.042 [2024-07-20 18:57:38.340298] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340311] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340319] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100f150 (9): Bad file descriptor 00:27:28.042 [2024-07-20 18:57:38.340324] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340338] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.042 [2024-07-20 18:57:38.340377] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.042 [2024-07-20 18:57:38.340391] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042
[2024-07-20 18:57:38.340404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.042 [2024-07-20 18:57:38.340405] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.042 [2024-07-20 18:57:38.340420] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340436] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.042 [2024-07-20 18:57:38.340450] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.042 [2024-07-20 18:57:38.340465] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.042 [2024-07-20 18:57:38.340479] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.042 [2024-07-20 18:57:38.340492] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340496] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10045f0 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340505] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340518] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340524] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1025e70 (9): Bad file descriptor 00:27:28.042 [2024-07-20 18:57:38.340531] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340544] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340557] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340570] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340583]
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340596] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340611] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340625] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340638] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with [2024-07-20 18:57:38.340633] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:28.042 the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340654] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340666] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340679] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340692] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340705] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340718] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340730] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340743] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340755] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340768] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340780] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340801] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340816] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340831] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340844] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340857] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 
18:57:38.340869] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340882] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340895] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340920] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340933] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340945] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340962] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340976] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.340989] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with [2024-07-20 18:57:38.340985] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:28.042 the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.341005] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d160 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.341349] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:28.042 [2024-07-20 18:57:38.341790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.042 [2024-07-20 18:57:38.341825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcccd0 with addr=10.0.0.2, port=4420 00:27:28.042 [2024-07-20 18:57:38.341852] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcccd0 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.342214] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:28.042 [2024-07-20 18:57:38.342927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.042 [2024-07-20 18:57:38.342956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe0300 with addr=10.0.0.2, port=4420 00:27:28.042 [2024-07-20 18:57:38.342973] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe0300 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.342993] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcccd0 (9): Bad file descriptor 00:27:28.042 [2024-07-20 18:57:38.343058] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.042 [2024-07-20 18:57:38.343087] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343101] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343114] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343127] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343139] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343152] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343165] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343178] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343211] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343227] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343240] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343274] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe0300 (9): Bad file descriptor 00:27:28.043 [2024-07-20 18:57:38.343658] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with [2024-07-20 18:57:38.343661] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error the state(5) to be set 00:27:28.043 state 00:27:28.043 [2024-07-20 18:57:38.343684] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with [2024-07-20 18:57:38.343685] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] contrthe state(5) to be set 00:27:28.043 oller reinitialization failed 00:27:28.043 [2024-07-20 18:57:38.343699] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343704] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:27:28.043 [2024-07-20 18:57:38.343712] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343726] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343739] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343752] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343764] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343777] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.043 [2024-07-20 18:57:38.343789] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-20 18:57:38.343813] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.043 the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343828] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.043 [2024-07-20 18:57:38.343841] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.043 [2024-07-20 18:57:38.343854] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343868] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.043 [2024-07-20 18:57:38.343880] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.043 [2024-07-20 18:57:38.343893] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.043 
[2024-07-20 18:57:38.343907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-20 18:57:38.343920] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.043 the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343936] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.043 [2024-07-20 18:57:38.343949] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.043 [2024-07-20 18:57:38.343963] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.043 [2024-07-20 18:57:38.343976] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.343987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.043 [2024-07-20 18:57:38.343989] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.344003] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with [2024-07-20 18:57:38.344004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128the state(5) to be set 00:27:28.043 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.043 [2024-07-20 18:57:38.344017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.344019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.043 [2024-07-20 18:57:38.344030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.344036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.043 [2024-07-20 18:57:38.344043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.344051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.043 [2024-07-20 18:57:38.344056] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 
[2024-07-20 18:57:38.344068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128[2024-07-20 18:57:38.344069] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.043 the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.344084] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with [2024-07-20 18:57:38.344084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:28.043 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.043 [2024-07-20 18:57:38.344099] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.344104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.043 [2024-07-20 18:57:38.344112] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.344119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.043 [2024-07-20 18:57:38.344131] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.344136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.043 [2024-07-20 18:57:38.344145] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.344152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.043 [2024-07-20 18:57:38.344173] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.344185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:12[2024-07-20 18:57:38.344186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.043 the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.344201] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with [2024-07-20 18:57:38.344201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:28.043 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.043 [2024-07-20 18:57:38.344216] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.344220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.043 [2024-07-20 18:57:38.344229] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.344235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:28.043 [2024-07-20 18:57:38.344242] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.344251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.043 [2024-07-20 18:57:38.344254] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.344266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-20 18:57:38.344267] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.043 the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.344281] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.344283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.043 [2024-07-20 18:57:38.344293] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.043 [2024-07-20 18:57:38.344298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.044 [2024-07-20 18:57:38.344306] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.044 [2024-07-20 18:57:38.344314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044 [2024-07-20 18:57:38.344318] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.044 [2024-07-20 18:57:38.344330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.044 [2024-07-20 18:57:38.344333] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.044 [2024-07-20 18:57:38.344346] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.044 [2024-07-20 18:57:38.344347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044 [2024-07-20 18:57:38.344358] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.044 [2024-07-20 18:57:38.344362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.044 [2024-07-20 18:57:38.344370] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d600 is same with the state(5) to be set 00:27:28.044 [2024-07-20 18:57:38.344378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044 [2024-07-20 18:57:38.344393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:28.044 [2024-07-20 18:57:38.344409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044 [2024-07-20 18:57:38.344423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.044 [2024-07-20 18:57:38.344439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044 [2024-07-20 18:57:38.344453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.044 [2024-07-20 18:57:38.344468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044 [2024-07-20 18:57:38.344482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.044 [2024-07-20 18:57:38.344498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044 [2024-07-20 18:57:38.344512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.044 [2024-07-20 18:57:38.344527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044 [2024-07-20 18:57:38.344542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.044 [2024-07-20 18:57:38.344558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044 [2024-07-20 18:57:38.344572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.044 [2024-07-20 18:57:38.344588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044 [2024-07-20 18:57:38.344602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.044 [2024-07-20 18:57:38.344618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044 [2024-07-20 18:57:38.344632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.044 [2024-07-20 18:57:38.344651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044 [2024-07-20 18:57:38.344666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.044 [2024-07-20 18:57:38.344682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044 [2024-07-20 18:57:38.344697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:28.044 [2024-07-20 18:57:38.344713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044 [2024-07-20 18:57:38.344727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.044 [2024-07-20 18:57:38.344743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044 [2024-07-20 18:57:38.344757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.044 [2024-07-20 18:57:38.344772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044 [2024-07-20 18:57:38.344787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.044 [2024-07-20 18:57:38.344827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044 [2024-07-20 18:57:38.344843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.044 [2024-07-20 18:57:38.344860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044 [2024-07-20 18:57:38.344875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.044 [2024-07-20 18:57:38.344891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044 [2024-07-20 18:57:38.344906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.044 [2024-07-20 18:57:38.344922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044 [2024-07-20 18:57:38.344937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.044 [2024-07-20 18:57:38.344953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044 [2024-07-20 18:57:38.344968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.044 [2024-07-20 18:57:38.344984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044 [2024-07-20 18:57:38.344999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.044 [2024-07-20 18:57:38.345015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044 [2024-07-20 18:57:38.345029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.044 [2024-07-20 
18:57:38.345045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.044
[2024-07-20 18:57:38.345064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.044
[2024-07-20 18:57:38.345081 .. 18:57:38.345885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39-62 nsid:1 lba:29568-32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.045
[2024-07-20 18:57:38.345104 .. 18:57:38.346013] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247daa0 is same with the state(5) to be set (message repeated at each timestamp in this interval) 00:27:28.045
[2024-07-20 18:57:38.345901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.045
[2024-07-20 18:57:38.345916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.045
[2024-07-20 18:57:38.345932] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd8630 is same with the state(5) to be set 00:27:28.045
[2024-07-20 18:57:38.346002] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfd8630 was disconnected and freed. reset controller. 00:27:28.045
[2024-07-20 18:57:38.346190] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.045
[2024-07-20 18:57:38.346219] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.045
[2024-07-20 18:57:38.346234] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.045
[2024-07-20 18:57:38.346249] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.045
[2024-07-20 18:57:38.347584] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.045 [2024-07-20 18:57:38.347609] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:28.045 [2024-07-20 18:57:38.347632] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118e120 (9): Bad file descriptor 00:27:28.045 [2024-07-20 18:57:38.347723] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:28.046 [2024-07-20 18:57:38.348489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.046 [2024-07-20 18:57:38.348519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118e120 with addr=10.0.0.2, port=4420 00:27:28.046 [2024-07-20 18:57:38.348536] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118e120 is same with the state(5) to be set 00:27:28.046 [2024-07-20 18:57:38.348624] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:28.046 [2024-07-20 18:57:38.348715] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118e120 (9): Bad file descriptor 00:27:28.046 [2024-07-20 18:57:38.348826] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:28.046 [2024-07-20 18:57:38.348860] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:28.046 [2024-07-20 18:57:38.348878] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:28.046 [2024-07-20 18:57:38.348892] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:28.046 [2024-07-20 18:57:38.348960] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.046 [2024-07-20 18:57:38.349886] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118b310 (9): Bad file descriptor 00:27:28.046 [2024-07-20 18:57:38.349932] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10045f0 (9): Bad file descriptor 00:27:28.046 [2024-07-20 18:57:38.349990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.046 [2024-07-20 18:57:38.350012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.046 [2024-07-20 18:57:38.350028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.046 [2024-07-20 18:57:38.350043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.046 [2024-07-20 18:57:38.350057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.046 [2024-07-20 18:57:38.350071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.046 [2024-07-20 18:57:38.350086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.046 [2024-07-20 18:57:38.350100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.046 [2024-07-20 18:57:38.350113] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aa3c0 is same with the state(5) to be set 00:27:28.046 [2024-07-20 18:57:38.350158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.046 [2024-07-20 18:57:38.350179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.046 [2024-07-20 18:57:38.350194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.046 [2024-07-20 18:57:38.350208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.046 [2024-07-20 18:57:38.350223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.046 [2024-07-20 18:57:38.350236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.046 [2024-07-20 18:57:38.350251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.046 [2024-07-20 18:57:38.350264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.046 [2024-07-20 18:57:38.350278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101ba60 is same with the state(5) to be set 00:27:28.046 [2024-07-20 18:57:38.350421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.046 [2024-07-20 18:57:38.350444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.046 [2024-07-20 18:57:38.350466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.046 [2024-07-20 18:57:38.350482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.046 [2024-07-20 18:57:38.350499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.046 [2024-07-20 18:57:38.350514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.046 [2024-07-20 18:57:38.350539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.046 [2024-07-20 18:57:38.350555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.046 [2024-07-20 18:57:38.350571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.046 [2024-07-20 18:57:38.350586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.046 [2024-07-20 18:57:38.350602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.046 [2024-07-20 18:57:38.350617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.046 [2024-07-20 18:57:38.350634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.046 [2024-07-20 18:57:38.350648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.046 [2024-07-20 18:57:38.350665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.046 [2024-07-20 18:57:38.350679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.046 [2024-07-20 18:57:38.350695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.046 [2024-07-20 18:57:38.350710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.046 [2024-07-20 18:57:38.350726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.046 [2024-07-20 18:57:38.350741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.046 [2024-07-20 18:57:38.350757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.046 [2024-07-20 18:57:38.350772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.046 [2024-07-20 18:57:38.350789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.046 [2024-07-20 18:57:38.350813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.310 [2024-07-20 18:57:38.350830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.310 [2024-07-20 18:57:38.350845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.310 [2024-07-20 18:57:38.350861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.310 [2024-07-20 18:57:38.350877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.310 [2024-07-20 18:57:38.350893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.310 [2024-07-20 18:57:38.350908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.310 [2024-07-20 18:57:38.350924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.350943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.350960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.350975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.350991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:28.311 [2024-07-20 18:57:38.351416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 
18:57:38.351729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.351979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.351994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.352010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.352025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.352042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.352057] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.352073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.352088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.352104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.352119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.352139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.352155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.352171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.352185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.352202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.352217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.352233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.352248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.311 [2024-07-20 18:57:38.352266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.311 [2024-07-20 18:57:38.352280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.352297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.352312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.352328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.352342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.352358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.352373] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.352390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.352404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.352420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.352435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.352451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.352466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.352481] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10af910 is same with the state(5) to be set 00:27:28.312 [2024-07-20 18:57:38.353724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.353748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.353776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.353799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.353818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.353834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.353851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.353866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.353882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.353897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.353914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.353928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.353945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.353960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.353976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.353991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.312 [2024-07-20 18:57:38.354890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.312 [2024-07-20 18:57:38.354905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.354921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.354936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.354952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.354967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.354984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.354998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.355034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.355066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.355106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.355137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.355168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.355199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.355231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:28.313 [2024-07-20 18:57:38.355263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.355295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.355327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.355359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.355390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.355424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.355457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.355488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.355520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.355552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 
18:57:38.355590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.355622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.355653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.355683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.355714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.355744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.355776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.355818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.355837] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eb40 is same with the state(5) to be set 00:27:28.313 [2024-07-20 18:57:38.357125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.357148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.357169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.357185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.357202] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.357218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.357234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.357249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.357265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.357280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.357297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.357311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.357328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.357342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.357358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.357375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.357392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.357407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.357423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.357438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.357455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.357470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.357487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.357502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.357523] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.357538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.357555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.313 [2024-07-20 18:57:38.357570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.313 [2024-07-20 18:57:38.357587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.357602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.357618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.357633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.357649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.357664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.357680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.357695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.357712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.357727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.357743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.357758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.357775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.357790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.357814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.357829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.357845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.357860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.357876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.357892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.357909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.357928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.357944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.357959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.357976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.357991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:28.314 [2024-07-20 18:57:38.358841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.314 [2024-07-20 18:57:38.358902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.314 [2024-07-20 18:57:38.358918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.358933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.358950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.358964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.358980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.358996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.359012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.359027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.359043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.359057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.359085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.359100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.359116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.359131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.359155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 
18:57:38.359170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.359187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.359202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.359217] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6fe0 is same with the state(5) to be set 00:27:28.315 [2024-07-20 18:57:38.360895] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:28.315 [2024-07-20 18:57:38.360928] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:28.315 [2024-07-20 18:57:38.360946] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:28.315 [2024-07-20 18:57:38.361085] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11aa3c0 (9): Bad file descriptor 00:27:28.315 [2024-07-20 18:57:38.361122] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101ba60 (9): Bad file descriptor 00:27:28.315 [2024-07-20 18:57:38.361154] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:28.315 [2024-07-20 18:57:38.361258] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:28.315 [2024-07-20 18:57:38.361672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.315 [2024-07-20 18:57:38.361706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfde190 with addr=10.0.0.2, port=4420 00:27:28.315 [2024-07-20 18:57:38.361725] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfde190 is same with the state(5) to be set 00:27:28.315 [2024-07-20 18:57:38.361964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.315 [2024-07-20 18:57:38.361990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100f150 with addr=10.0.0.2, port=4420 00:27:28.315 [2024-07-20 18:57:38.362006] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100f150 is same with the state(5) to be set 00:27:28.315 [2024-07-20 18:57:38.362238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.315 [2024-07-20 18:57:38.362262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1025e70 with addr=10.0.0.2, port=4420 00:27:28.315 [2024-07-20 18:57:38.362279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1025e70 is same with the state(5) to be set 00:27:28.315 [2024-07-20 18:57:38.362874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.362898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.362922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.362938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.362956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.362970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.362987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.363007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.363024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.363039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.363055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.363079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.363096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.363111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.363127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.363144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.363160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.363175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.363191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.363206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.363222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.363237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.363253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.363268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.363285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.363299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.363316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.363330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.363347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.363361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.363377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.363392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.363412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.363428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.363444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.363459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.315 [2024-07-20 18:57:38.363475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.315 [2024-07-20 18:57:38.363490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.363506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.363520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.363536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.363551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.363568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:28.316 [2024-07-20 18:57:38.363583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.363599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.363614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.363630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.363644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.363660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.363675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.363691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.363705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.363722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.363736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.363752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.363767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.363783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.363809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.363827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.363852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.363868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.363883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.363899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 
18:57:38.363914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.363930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.363945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.363961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.363976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.363992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364237] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.316 [2024-07-20 18:57:38.364837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-20 18:57:38.364853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.364868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.364884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.364899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.364915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.364930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.364945] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1130010 is same with the state(5) to be set 00:27:28.317 [2024-07-20 18:57:38.366199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.366243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.366281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.366313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.366345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.366376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.366407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.366438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.366469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.366501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.366542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.366573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.366606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.366638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.366672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.366704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.366734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.366765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.366802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.366846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.366876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.366907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.366938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.366970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.366984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.367001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.367016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.367032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.367047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.367062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.367090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.367107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.367122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.367138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.367152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.367168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.367183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.367198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.367213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.367229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.367244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.367261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.367276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.367292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.367307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.367323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.367337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.367353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.367368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.367384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.367398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.367415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.367429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.367446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.367461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-20 18:57:38.367481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-20 18:57:38.367496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.367512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-20 18:57:38.367527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.367543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-20 18:57:38.367558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.367574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-20 18:57:38.367590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.367606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-20 18:57:38.367620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.367636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-20 18:57:38.367651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.367667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-20 18:57:38.367682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.367698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-20 18:57:38.367713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.367729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:28.318 [2024-07-20 18:57:38.367744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.367760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-20 18:57:38.367775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.367791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-20 18:57:38.367859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.367883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-20 18:57:38.367898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.367915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-20 18:57:38.367934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.367951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-20 18:57:38.367966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.367982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-20 18:57:38.367997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.368013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-20 18:57:38.368028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.368044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-20 18:57:38.368059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.368085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-20 18:57:38.368101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.368118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-20 
18:57:38.368132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.368148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-20 18:57:38.368163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.368179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-20 18:57:38.368194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.368210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-20 18:57:38.368225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.368241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-20 18:57:38.368256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.368272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-20 18:57:38.368287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.368303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-20 18:57:38.368318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-20 18:57:38.368338] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd9b30 is same with the state(5) to be set 00:27:28.318 [2024-07-20 18:57:38.369875] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.318 [2024-07-20 18:57:38.369907] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:28.318 [2024-07-20 18:57:38.369926] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:28.318 [2024-07-20 18:57:38.369943] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:28.318 [2024-07-20 18:57:38.370341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.318 [2024-07-20 18:57:38.370370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcccd0 with addr=10.0.0.2, port=4420 00:27:28.318 [2024-07-20 18:57:38.370387] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcccd0 is same with the state(5) to be set 00:27:28.318 [2024-07-20 18:57:38.370414] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xfde190 (9): Bad file descriptor 00:27:28.318 [2024-07-20 18:57:38.370435] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100f150 (9): Bad file descriptor 00:27:28.318 [2024-07-20 18:57:38.370453] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1025e70 (9): Bad file descriptor 00:27:28.318 [2024-07-20 18:57:38.370839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.318 [2024-07-20 18:57:38.370868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe0300 with addr=10.0.0.2, port=4420 00:27:28.318 [2024-07-20 18:57:38.370884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe0300 is same with the state(5) to be set 00:27:28.318 [2024-07-20 18:57:38.371115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.318 [2024-07-20 18:57:38.371140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118e120 with addr=10.0.0.2, port=4420 00:27:28.318 [2024-07-20 18:57:38.371156] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118e120 is same with the state(5) to be set 00:27:28.318 [2024-07-20 18:57:38.371368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.318 [2024-07-20 18:57:38.371393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b310 with addr=10.0.0.2, port=4420 00:27:28.318 [2024-07-20 18:57:38.371408] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b310 is same with the state(5) to be set 00:27:28.318 [2024-07-20 18:57:38.371628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.318 [2024-07-20 18:57:38.371653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10045f0 with addr=10.0.0.2, port=4420 00:27:28.318 [2024-07-20 18:57:38.371668] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10045f0 is same with the state(5) to be set 00:27:28.318 [2024-07-20 18:57:38.371687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcccd0 (9): Bad file descriptor 00:27:28.318 [2024-07-20 18:57:38.371705] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:28.318 [2024-07-20 18:57:38.371719] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:28.318 [2024-07-20 18:57:38.371735] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:28.318 [2024-07-20 18:57:38.371758] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:28.318 [2024-07-20 18:57:38.371772] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:28.318 [2024-07-20 18:57:38.371786] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:27:28.318 [2024-07-20 18:57:38.371816] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:28.318 [2024-07-20 18:57:38.371841] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:28.318 [2024-07-20 18:57:38.371856] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:28.318 [2024-07-20 18:57:38.372423] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.318 [2024-07-20 18:57:38.372445] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.318 [2024-07-20 18:57:38.372458] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.318 [2024-07-20 18:57:38.372474] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe0300 (9): Bad file descriptor 00:27:28.318 [2024-07-20 18:57:38.372494] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118e120 (9): Bad file descriptor 00:27:28.318 [2024-07-20 18:57:38.372512] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118b310 (9): Bad file descriptor 00:27:28.318 [2024-07-20 18:57:38.372530] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10045f0 (9): Bad file descriptor 00:27:28.318 [2024-07-20 18:57:38.372546] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:28.318 [2024-07-20 18:57:38.372559] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:28.318 [2024-07-20 18:57:38.372572] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:28.319 [2024-07-20 18:57:38.372634] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:28.319 [2024-07-20 18:57:38.372692] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.319 [2024-07-20 18:57:38.372731] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.319 [2024-07-20 18:57:38.372746] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.319 [2024-07-20 18:57:38.372760] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.319 [2024-07-20 18:57:38.372778] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:28.319 [2024-07-20 18:57:38.372798] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:28.319 [2024-07-20 18:57:38.372814] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:27:28.319 [2024-07-20 18:57:38.372841] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:28.319 [2024-07-20 18:57:38.372855] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:28.319 [2024-07-20 18:57:38.372868] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:28.319 [2024-07-20 18:57:38.372885] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:28.319 [2024-07-20 18:57:38.372899] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:28.319 [2024-07-20 18:57:38.372912] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:28.319 [2024-07-20 18:57:38.372991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:28.319 [2024-07-20 18:57:38.373254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 
18:57:38.373583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373921] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.373970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.373985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.319 [2024-07-20 18:57:38.374001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.319 [2024-07-20 18:57:38.374016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374233] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.374982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.374997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.375012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.375027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.375043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.375058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.375076] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdb030 is same with the state(5) to be set 00:27:28.320 [2024-07-20 18:57:38.376333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.376356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.376377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.376393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.376409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.376424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.376440] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.376455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.376471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.376485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.376502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.376516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.376533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.376548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.376564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.376578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.376594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.320 [2024-07-20 18:57:38.376609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.320 [2024-07-20 18:57:38.376625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.376640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.376656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.376672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.376688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.376703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.376723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.376738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.376755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.376769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.376786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.376808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.376825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.376840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.376856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.376871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.376887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.376902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.376918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.376932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.376949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.376963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.376979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.376993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:28.321 [2024-07-20 18:57:38.377711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.377978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.321 [2024-07-20 18:57:38.377994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.321 [2024-07-20 18:57:38.378008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.322 [2024-07-20 18:57:38.378024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.322 [2024-07-20 18:57:38.378039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.322 [2024-07-20 18:57:38.378055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.322 [2024-07-20 
18:57:38.378069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.322 [2024-07-20 18:57:38.378085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.322 [2024-07-20 18:57:38.378099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.322 [2024-07-20 18:57:38.378116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.322 [2024-07-20 18:57:38.378130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.322 [2024-07-20 18:57:38.378147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.322 [2024-07-20 18:57:38.378161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.322 [2024-07-20 18:57:38.378178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.322 [2024-07-20 18:57:38.378193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.322 [2024-07-20 18:57:38.378209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.322 [2024-07-20 18:57:38.378234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.322 [2024-07-20 18:57:38.378249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.322 [2024-07-20 18:57:38.378264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.322 [2024-07-20 18:57:38.378280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.322 [2024-07-20 18:57:38.378294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.322 [2024-07-20 18:57:38.378311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.322 [2024-07-20 18:57:38.378325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.322 [2024-07-20 18:57:38.378341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.322 [2024-07-20 18:57:38.378360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.322 [2024-07-20 18:57:38.378377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.322 [2024-07-20 18:57:38.378391] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.322 [2024-07-20 18:57:38.378406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdc330 is same with the state(5) to be set 00:27:28.322 [2024-07-20 18:57:38.379991] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:28.322 [2024-07-20 18:57:38.380021] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:28.322 [2024-07-20 18:57:38.380039] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:28.322 [2024-07-20 18:57:38.380057] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.322 [2024-07-20 18:57:38.380072] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.322 [2024-07-20 18:57:38.380084] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.322 [2024-07-20 18:57:38.380096] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.322 [2024-07-20 18:57:38.380210] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:27:28.322 task offset: 18048 on job bdev=Nvme3n1 fails
00:27:28.322
00:27:28.322                                                                                         Latency(us)
00:27:28.322 Device Information   : runtime(s)       IOPS     MiB/s    Fail/s     TO/s      Average         min         max
00:27:28.322 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:28.322 Job: Nvme1n1 ended in about 0.93 seconds with error
00:27:28.322 Verification LBA range: start 0x0 length 0x400
00:27:28.322 Nvme1n1              :       0.93     207.50     12.97     69.17     0.00    228740.55     7087.60   259425.47
00:27:28.322 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:28.322 Job: Nvme2n1 ended in about 0.94 seconds with error
00:27:28.322 Verification LBA range: start 0x0 length 0x400
00:27:28.322 Nvme2n1              :       0.94     136.24      8.52     68.12     0.00    303745.20    24660.95   240784.12
00:27:28.322 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:28.322 Job: Nvme3n1 ended in about 0.92 seconds with error
00:27:28.322 Verification LBA range: start 0x0 length 0x400
00:27:28.322 Nvme3n1              :       0.92     138.57      8.66     69.28     0.00    292443.02     8009.96   338651.21
00:27:28.322 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:28.322 Job: Nvme4n1 ended in about 0.94 seconds with error
00:27:28.322 Verification LBA range: start 0x0 length 0x400
00:27:28.322 Nvme4n1              :       0.94     203.64     12.73     67.88     0.00    219451.54    22816.24   256318.58
00:27:28.322 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:28.322 Job: Nvme5n1 ended in about 0.95 seconds with error
00:27:28.322 Verification LBA range: start 0x0 length 0x400
00:27:28.322 Nvme5n1              :       0.95     201.70     12.61     67.23     0.00    217111.89    22719.15   208161.75
00:27:28.322 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:28.322 Job: Nvme6n1 ended in about 0.93 seconds with error
00:27:28.322 Verification LBA range: start 0x0 length 0x400
00:27:28.322 Nvme6n1              :       0.93     205.72     12.86     68.57     0.00    208040.11    12379.02   250104.79
00:27:28.322 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:28.322 Job: Nvme7n1 ended in about 0.96 seconds with error
00:27:28.322 Verification LBA range: start 0x0 length 0x400
00:27:28.322 Nvme7n1              :       0.96     133.99      8.37     66.99     0.00    278910.80    25826.04   285834.05
00:27:28.322 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:28.322 Job: Nvme8n1 ended in about 0.96 seconds with error
00:27:28.322 Verification LBA range: start 0x0 length 0x400
00:27:28.322 Nvme8n1              :       0.96     133.05      8.32     66.53     0.00    275250.19    41360.50   254765.13
00:27:28.322 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:28.322 Job: Nvme9n1 ended in about 0.97 seconds with error
00:27:28.322 Verification LBA range: start 0x0 length 0x400
00:27:28.322 Nvme9n1              :       0.97     132.60      8.29     66.30     0.00    270662.16    24466.77   248551.35
00:27:28.322 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:28.322 Job: Nvme10n1 ended in about 0.95 seconds with error
00:27:28.322 Verification LBA range: start 0x0 length 0x400
00:27:28.322 Nvme10n1             :       0.95     135.28      8.45     67.64     0.00    258468.98    24272.59   267192.70
00:27:28.322 ===================================================================================================================
00:27:28.322 Total                :               1628.29    101.77    677.72     0.00    250935.81     7087.60   338651.21
00:27:28.322 [2024-07-20 18:57:38.406604] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:28.322 [2024-07-20 18:57:38.406687] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:28.322 [2024-07-20 18:57:38.407164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-07-20 18:57:38.407202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1025e70 with addr=10.0.0.2, port=4420 00:27:28.322 [2024-07-20 18:57:38.407223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1025e70 is same with the state(5) to be set 00:27:28.322 [2024-07-20 18:57:38.407443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-07-20 18:57:38.407473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100f150 with addr=10.0.0.2, port=4420 00:27:28.322 [2024-07-20 18:57:38.407490] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100f150 is same with the state(5) to be set 00:27:28.322 [2024-07-20 18:57:38.407710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-07-20 18:57:38.407738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfde190 with addr=10.0.0.2, port=4420 00:27:28.322 [2024-07-20 18:57:38.407755] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfde190 is same with the state(5) to be set 00:27:28.322 [2024-07-20 18:57:38.408617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-07-20 18:57:38.408648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11aa3c0 with addr=10.0.0.2, port=4420 00:27:28.322 [2024-07-20 18:57:38.408665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aa3c0 is same with the state(5) to be set 00:27:28.322 [2024-07-20 18:57:38.408879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-07-20 18:57:38.408907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101ba60 with addr=10.0.0.2, port=4420 00:27:28.322 [2024-07-20 18:57:38.408924] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101ba60 is same with the state(5) to be set 00:27:28.322 [2024-07-20 18:57:38.408950] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1025e70 (9): Bad file descriptor 00:27:28.322 [2024-07-20 18:57:38.408974] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100f150 (9): Bad file descriptor 00:27:28.322 [2024-07-20 18:57:38.408994] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfde190 (9): Bad file descriptor 00:27:28.322 [2024-07-20 18:57:38.409054] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:28.322 [2024-07-20 18:57:38.409079] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:28.322 [2024-07-20 18:57:38.409099] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:28.322 [2024-07-20 18:57:38.409131] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:28.322 [2024-07-20 18:57:38.409151] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:28.322 [2024-07-20 18:57:38.409169] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:28.322 [2024-07-20 18:57:38.409187] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:28.322 [2024-07-20 18:57:38.409865] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:28.322 [2024-07-20 18:57:38.409896] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:28.322 [2024-07-20 18:57:38.409915] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:28.322 [2024-07-20 18:57:38.409932] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.322 [2024-07-20 18:57:38.410007] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11aa3c0 (9): Bad file descriptor 00:27:28.323 [2024-07-20 18:57:38.410033] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101ba60 (9): Bad file descriptor 00:27:28.323 [2024-07-20 18:57:38.410051] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:28.323 [2024-07-20 18:57:38.410065] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:28.323 [2024-07-20 18:57:38.410082] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:28.323 [2024-07-20 18:57:38.410100] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:28.323 [2024-07-20 18:57:38.410115] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:28.323 [2024-07-20 18:57:38.410129] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:27:28.323 [2024-07-20 18:57:38.410145] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:28.323 [2024-07-20 18:57:38.410160] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:28.323 [2024-07-20 18:57:38.410174] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:28.323 [2024-07-20 18:57:38.410245] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:28.323 [2024-07-20 18:57:38.410269] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.323 [2024-07-20 18:57:38.410283] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.323 [2024-07-20 18:57:38.410295] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.323 [2024-07-20 18:57:38.410507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-07-20 18:57:38.410535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10045f0 with addr=10.0.0.2, port=4420 00:27:28.323 [2024-07-20 18:57:38.410552] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10045f0 is same with the state(5) to be set 00:27:28.323 [2024-07-20 18:57:38.410806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-07-20 18:57:38.410834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b310 with addr=10.0.0.2, port=4420 00:27:28.323 [2024-07-20 18:57:38.410851] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b310 is same with the state(5) to be set 00:27:28.323 [2024-07-20 18:57:38.411045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-07-20 18:57:38.411076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118e120 with addr=10.0.0.2, port=4420 00:27:28.323 [2024-07-20 18:57:38.411094] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118e120 is same with the state(5) to be set 00:27:28.323 [2024-07-20 18:57:38.411292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-07-20 18:57:38.411318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe0300 with addr=10.0.0.2, port=4420 00:27:28.323 [2024-07-20 18:57:38.411335] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe0300 is same with the state(5) to be set 00:27:28.323 [2024-07-20 18:57:38.411350] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:28.323 [2024-07-20 18:57:38.411364] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:28.323 [2024-07-20 18:57:38.411378] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:27:28.323 [2024-07-20 18:57:38.411396] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:28.323 [2024-07-20 18:57:38.411410] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:28.323 [2024-07-20 18:57:38.411424] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:28.323 [2024-07-20 18:57:38.411464] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.323 [2024-07-20 18:57:38.411482] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.323 [2024-07-20 18:57:38.411888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-07-20 18:57:38.411916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcccd0 with addr=10.0.0.2, port=4420 00:27:28.323 [2024-07-20 18:57:38.411932] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcccd0 is same with the state(5) to be set 00:27:28.323 [2024-07-20 18:57:38.411952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10045f0 (9): Bad file descriptor 00:27:28.323 [2024-07-20 18:57:38.411971] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118b310 (9): Bad file descriptor 00:27:28.323 [2024-07-20 18:57:38.411989] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118e120 (9): Bad file descriptor 00:27:28.323 [2024-07-20 18:57:38.412008] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe0300 (9): Bad file descriptor 00:27:28.323 [2024-07-20 18:57:38.412051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcccd0 (9): Bad file descriptor 00:27:28.323 [2024-07-20 18:57:38.412072] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:28.323 [2024-07-20 18:57:38.412085] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:28.323 [2024-07-20 18:57:38.412099] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:28.323 [2024-07-20 18:57:38.412116] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:28.323 [2024-07-20 18:57:38.412130] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:28.323 [2024-07-20 18:57:38.412144] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:28.323 [2024-07-20 18:57:38.412160] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:28.323 [2024-07-20 18:57:38.412174] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:28.323 [2024-07-20 18:57:38.412187] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:27:28.323 [2024-07-20 18:57:38.412208] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.323 [2024-07-20 18:57:38.412223] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.323 [2024-07-20 18:57:38.412237] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.323 [2024-07-20 18:57:38.412273] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.323 [2024-07-20 18:57:38.412290] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.323 [2024-07-20 18:57:38.412303] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.323 [2024-07-20 18:57:38.412315] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.323 [2024-07-20 18:57:38.412327] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:28.323 [2024-07-20 18:57:38.412340] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:28.323 [2024-07-20 18:57:38.412354] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:28.323 [2024-07-20 18:57:38.412391] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.581 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:28.581 18:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1472078 00:27:29.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1472078) - No such process 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:29.957 rmmod nvme_tcp 00:27:29.957 rmmod nvme_fabrics 00:27:29.957 rmmod nvme_keyring 00:27:29.957 18:57:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:29.957 18:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.859 18:57:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:31.859 00:27:31.859 real 0m7.368s 00:27:31.859 user 0m17.274s 00:27:31.859 sys 0m1.518s 00:27:31.859 18:57:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:31.859 18:57:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:31.859 ************************************ 00:27:31.859 END TEST nvmf_shutdown_tc3 00:27:31.859 ************************************ 00:27:31.859 18:57:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:31.859 00:27:31.859 real 0m27.198s 00:27:31.859 user 1m14.553s 00:27:31.859 sys 0m6.452s 00:27:31.859 18:57:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:31.859 18:57:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:31.859 ************************************ 00:27:31.859 END TEST nvmf_shutdown 00:27:31.859 ************************************ 00:27:31.859 18:57:42 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:27:31.859 18:57:42 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:31.859 18:57:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:31.859 18:57:42 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:27:31.859 18:57:42 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:31.859 18:57:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:31.859 18:57:42 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:27:31.859 18:57:42 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:31.859 18:57:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:31.859 18:57:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:31.859 18:57:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:31.859 ************************************ 00:27:31.859 START TEST nvmf_multicontroller 00:27:31.859 ************************************ 
00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:31.859 * Looking for test storage... 00:27:31.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:31.859 18:57:42 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:31.859 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:31.860 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:31.860 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:31.860 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:31.860 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.860 18:57:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:31.860 18:57:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.860 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:31.860 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:31.860 18:57:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:31.860 18:57:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:34.383 18:57:44 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:34.383 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:34.383 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:34.383 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:34.383 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:34.383 18:57:44 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:34.383 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:34.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:34.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:27:34.383 00:27:34.383 --- 10.0.0.2 ping statistics --- 00:27:34.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.384 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:34.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:34.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:27:34.384 00:27:34.384 --- 10.0.0.1 ping statistics --- 00:27:34.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.384 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1474474 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1474474 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 1474474 ']' 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.384 [2024-07-20 18:57:44.342493] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:34.384 [2024-07-20 18:57:44.342590] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.384 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.384 [2024-07-20 18:57:44.415288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:34.384 [2024-07-20 18:57:44.509216] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.384 [2024-07-20 18:57:44.509263] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:34.384 [2024-07-20 18:57:44.509288] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.384 [2024-07-20 18:57:44.509301] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.384 [2024-07-20 18:57:44.509329] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:34.384 [2024-07-20 18:57:44.509457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:34.384 [2024-07-20 18:57:44.510191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:34.384 [2024-07-20 18:57:44.510196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.384 [2024-07-20 18:57:44.642907] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.384 18:57:44 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.384 Malloc0 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.384 [2024-07-20 18:57:44.699282] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.384 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.641 [2024-07-20 18:57:44.707173] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.641 Malloc1 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 
00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1474622 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1474622 /var/tmp/bdevperf.sock 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 1474622 ']' 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:34.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
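For reference, the target-side setup traced above can be reproduced against a running nvmf_tgt with the plain SPDK RPC client. This is a condensed sketch of the same calls (rpc_cmd in the trace forwards to rpc.py; the ./scripts/rpc.py path and the default RPC socket are assumptions here, not part of the trace):

  # TCP transport plus two malloc-backed subsystems, each listening on ports 4420 and 4421
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421

bdevperf is then started with "-z -r /var/tmp/bdevperf.sock" so that its NVMe controllers can be attached through that second RPC socket, as in the trace that follows.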
00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:34.641 18:57:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.898 NVMe0n1 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.898 1 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.898 request: 00:27:34.898 { 00:27:34.898 "name": "NVMe0", 00:27:34.898 "trtype": "tcp", 00:27:34.898 "traddr": "10.0.0.2", 00:27:34.898 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:34.898 "hostaddr": "10.0.0.2", 00:27:34.898 "hostsvcid": "60000", 00:27:34.898 "adrfam": "ipv4", 00:27:34.898 "trsvcid": "4420", 00:27:34.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:34.898 "method": 
"bdev_nvme_attach_controller", 00:27:34.898 "req_id": 1 00:27:34.898 } 00:27:34.898 Got JSON-RPC error response 00:27:34.898 response: 00:27:34.898 { 00:27:34.898 "code": -114, 00:27:34.898 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:34.898 } 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:34.898 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.899 request: 00:27:34.899 { 00:27:34.899 "name": "NVMe0", 00:27:34.899 "trtype": "tcp", 00:27:34.899 "traddr": "10.0.0.2", 00:27:34.899 "hostaddr": "10.0.0.2", 00:27:34.899 "hostsvcid": "60000", 00:27:34.899 "adrfam": "ipv4", 00:27:34.899 "trsvcid": "4420", 00:27:34.899 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:34.899 "method": "bdev_nvme_attach_controller", 00:27:34.899 "req_id": 1 00:27:34.899 } 00:27:34.899 Got JSON-RPC error response 00:27:34.899 response: 00:27:34.899 { 00:27:34.899 "code": -114, 00:27:34.899 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:34.899 } 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:34.899 request: 00:27:34.899 { 00:27:34.899 "name": "NVMe0", 00:27:34.899 "trtype": "tcp", 00:27:34.899 "traddr": "10.0.0.2", 00:27:34.899 "hostaddr": "10.0.0.2", 00:27:34.899 "hostsvcid": "60000", 00:27:34.899 "adrfam": "ipv4", 00:27:34.899 "trsvcid": "4420", 00:27:34.899 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:34.899 "multipath": "disable", 00:27:34.899 "method": "bdev_nvme_attach_controller", 00:27:34.899 "req_id": 1 00:27:34.899 } 00:27:34.899 Got JSON-RPC error response 00:27:34.899 response: 00:27:34.899 { 00:27:34.899 "code": -114, 00:27:34.899 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:34.899 } 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.899 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:35.155 request: 00:27:35.155 { 00:27:35.155 "name": "NVMe0", 00:27:35.155 "trtype": "tcp", 00:27:35.155 "traddr": "10.0.0.2", 00:27:35.155 "hostaddr": "10.0.0.2", 00:27:35.155 "hostsvcid": "60000", 00:27:35.155 "adrfam": "ipv4", 00:27:35.155 "trsvcid": "4420", 00:27:35.155 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:35.155 "multipath": "failover", 00:27:35.155 "method": "bdev_nvme_attach_controller", 00:27:35.155 "req_id": 1 00:27:35.155 } 00:27:35.155 Got JSON-RPC error response 00:27:35.155 response: 00:27:35.155 { 00:27:35.155 "code": -114, 00:27:35.155 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:35.155 } 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:35.155 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:35.155 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:35.155 18:57:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:36.526 0 00:27:36.526 18:57:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:36.526 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.526 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:36.526 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.526 18:57:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1474622 00:27:36.526 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 1474622 ']' 00:27:36.526 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 1474622 00:27:36.526 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:27:36.526 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:36.526 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1474622 00:27:36.526 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:36.526 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:36.526 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1474622' 00:27:36.526 killing process with pid 1474622 00:27:36.526 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 1474622 00:27:36.526 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 1474622 00:27:36.527 18:57:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:36.527 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.527 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:36.527 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.527 18:57:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:36.527 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.527 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:36.527 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.527 18:57:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:27:36.527 18:57:46 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:36.527 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:27:36.527 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:36.527 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:27:36.785 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:27:36.785 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:36.785 [2024-07-20 18:57:44.805088] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:36.785 [2024-07-20 18:57:44.805203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1474622 ] 00:27:36.785 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.785 [2024-07-20 18:57:44.866299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.785 [2024-07-20 18:57:44.951678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.785 [2024-07-20 18:57:45.446966] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 26c130dd-5198-470d-804d-dcdc6c892e17 already exists 00:27:36.785 [2024-07-20 18:57:45.447008] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:26c130dd-5198-470d-804d-dcdc6c892e17 alias for bdev NVMe1n1 00:27:36.785 [2024-07-20 18:57:45.447027] bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:36.785 Running I/O for 1 seconds... 
00:27:36.785 00:27:36.785 Latency(us) 00:27:36.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.785 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:36.785 NVMe0n1 : 1.01 19047.43 74.40 0.00 0.00 6702.49 2827.76 11602.30 00:27:36.785 =================================================================================================================== 00:27:36.785 Total : 19047.43 74.40 0.00 0.00 6702.49 2827.76 11602.30 00:27:36.785 Received shutdown signal, test time was about 1.000000 seconds 00:27:36.785 00:27:36.785 Latency(us) 00:27:36.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.785 =================================================================================================================== 00:27:36.785 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:36.785 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:36.785 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:36.785 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:27:36.785 18:57:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:36.785 18:57:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:36.785 18:57:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:27:36.785 18:57:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:36.785 18:57:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:27:36.785 18:57:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:36.785 18:57:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:36.785 rmmod nvme_tcp 00:27:36.785 rmmod nvme_fabrics 00:27:36.785 rmmod nvme_keyring 00:27:36.785 18:57:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:36.785 18:57:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:27:36.785 18:57:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:27:36.786 18:57:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1474474 ']' 00:27:36.786 18:57:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1474474 00:27:36.786 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 1474474 ']' 00:27:36.786 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 1474474 00:27:36.786 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:27:36.786 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:36.786 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1474474 00:27:36.786 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:36.786 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:36.786 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1474474' 00:27:36.786 killing process with pid 1474474 00:27:36.786 18:57:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 1474474 00:27:36.786 18:57:46 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 1474474 00:27:37.044 18:57:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:37.045 18:57:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:37.045 18:57:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:37.045 18:57:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:37.045 18:57:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:37.045 18:57:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.045 18:57:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:37.045 18:57:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.946 18:57:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:38.946 00:27:38.946 real 0m7.156s 00:27:38.946 user 0m10.703s 00:27:38.946 sys 0m2.289s 00:27:38.946 18:57:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:38.946 18:57:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:38.946 ************************************ 00:27:38.946 END TEST nvmf_multicontroller 00:27:38.946 ************************************ 00:27:38.946 18:57:49 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:38.946 18:57:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:38.946 18:57:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:38.946 18:57:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:39.205 ************************************ 00:27:39.205 START TEST nvmf_aer 00:27:39.205 ************************************ 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:39.205 * Looking for test storage... 
00:27:39.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:27:39.205 18:57:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:41.119 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:27:41.119 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:41.119 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.119 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:41.120 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:41.120 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.120 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:41.120 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:27:41.120 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:41.120 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:41.120 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:41.120 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:41.120 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:41.120 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:41.120 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:41.120 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:41.120 
18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:41.120 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:41.120 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:41.120 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:41.120 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:41.120 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:41.120 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:41.120 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:41.120 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:41.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:41.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:27:41.378 00:27:41.378 --- 10.0.0.2 ping statistics --- 00:27:41.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.378 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:41.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:41.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:27:41.378 00:27:41.378 --- 10.0.0.1 ping statistics --- 00:27:41.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.378 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1476824 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1476824 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 1476824 ']' 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:41.378 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:41.378 [2024-07-20 18:57:51.593182] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:41.378 [2024-07-20 18:57:51.593282] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:41.378 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.378 [2024-07-20 18:57:51.663161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:41.637 [2024-07-20 18:57:51.755057] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:41.637 [2024-07-20 18:57:51.755115] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:41.637 [2024-07-20 18:57:51.755142] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:41.637 [2024-07-20 18:57:51.755156] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:41.637 [2024-07-20 18:57:51.755173] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:41.637 [2024-07-20 18:57:51.755256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.637 [2024-07-20 18:57:51.755337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:41.637 [2024-07-20 18:57:51.755426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:41.637 [2024-07-20 18:57:51.755428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.637 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:41.637 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:27:41.637 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:41.637 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:41.637 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:41.637 18:57:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:41.637 18:57:51 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:41.637 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.637 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:41.637 [2024-07-20 18:57:51.911660] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:41.637 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.637 18:57:51 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:41.637 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.637 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:41.637 Malloc0 00:27:41.637 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.637 18:57:51 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:41.637 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.637 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:41.637 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.637 18:57:51 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:41.637 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.637 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:41.895 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.895 18:57:51 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:41.895 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.895 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:41.895 [2024-07-20 18:57:51.965438] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:27:41.895 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.895 18:57:51 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:41.895 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.895 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:41.895 [ 00:27:41.895 { 00:27:41.895 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:41.895 "subtype": "Discovery", 00:27:41.895 "listen_addresses": [], 00:27:41.895 "allow_any_host": true, 00:27:41.895 "hosts": [] 00:27:41.895 }, 00:27:41.895 { 00:27:41.895 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:41.895 "subtype": "NVMe", 00:27:41.895 "listen_addresses": [ 00:27:41.895 { 00:27:41.895 "trtype": "TCP", 00:27:41.895 "adrfam": "IPv4", 00:27:41.895 "traddr": "10.0.0.2", 00:27:41.895 "trsvcid": "4420" 00:27:41.895 } 00:27:41.895 ], 00:27:41.895 "allow_any_host": true, 00:27:41.895 "hosts": [], 00:27:41.895 "serial_number": "SPDK00000000000001", 00:27:41.895 "model_number": "SPDK bdev Controller", 00:27:41.895 "max_namespaces": 2, 00:27:41.895 "min_cntlid": 1, 00:27:41.895 "max_cntlid": 65519, 00:27:41.895 "namespaces": [ 00:27:41.895 { 00:27:41.895 "nsid": 1, 00:27:41.895 "bdev_name": "Malloc0", 00:27:41.895 "name": "Malloc0", 00:27:41.895 "nguid": "2470D1083F1842618E0992891AF3452E", 00:27:41.895 "uuid": "2470d108-3f18-4261-8e09-92891af3452e" 00:27:41.895 } 00:27:41.895 ] 00:27:41.895 } 00:27:41.895 ] 00:27:41.895 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.895 18:57:51 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:41.895 18:57:51 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:41.895 18:57:51 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1476853 00:27:41.895 18:57:51 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:41.895 18:57:51 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:41.895 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:27:41.895 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:41.895 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:27:41.895 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:27:41.895 18:57:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:27:41.895 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.895 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:41.895 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:27:41.895 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:27:41.895 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:27:41.895 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:41.895 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:41.895 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:27:41.895 18:57:52 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:41.895 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.895 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:42.153 Malloc1 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:42.153 [ 00:27:42.153 { 00:27:42.153 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:42.153 "subtype": "Discovery", 00:27:42.153 "listen_addresses": [], 00:27:42.153 "allow_any_host": true, 00:27:42.153 "hosts": [] 00:27:42.153 }, 00:27:42.153 { 00:27:42.153 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:42.153 "subtype": "NVMe", 00:27:42.153 "listen_addresses": [ 00:27:42.153 { 00:27:42.153 "trtype": "TCP", 00:27:42.153 "adrfam": "IPv4", 00:27:42.153 "traddr": "10.0.0.2", 00:27:42.153 "trsvcid": "4420" 00:27:42.153 } 00:27:42.153 ], 00:27:42.153 "allow_any_host": true, 00:27:42.153 "hosts": [], 00:27:42.153 "serial_number": "SPDK00000000000001", 00:27:42.153 "model_number": "SPDK bdev Controller", 00:27:42.153 "max_namespaces": 2, 00:27:42.153 "min_cntlid": 1, 00:27:42.153 "max_cntlid": 65519, 00:27:42.153 "namespaces": [ 00:27:42.153 { 00:27:42.153 "nsid": 1, 00:27:42.153 "bdev_name": "Malloc0", 00:27:42.153 "name": "Malloc0", 00:27:42.153 "nguid": "2470D1083F1842618E0992891AF3452E", 00:27:42.153 "uuid": "2470d108-3f18-4261-8e09-92891af3452e" 00:27:42.153 }, 00:27:42.153 { 00:27:42.153 "nsid": 2, 00:27:42.153 "bdev_name": "Malloc1", 00:27:42.153 "name": "Malloc1", 00:27:42.153 "nguid": "119955E25F504A349C20FD48E5915068", 00:27:42.153 "uuid": "119955e2-5f50-4a34-9c20-fd48e5915068" 00:27:42.153 } 00:27:42.153 ] 00:27:42.153 } 00:27:42.153 ] 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1476853 00:27:42.153 Asynchronous Event Request test 00:27:42.153 Attaching to 10.0.0.2 00:27:42.153 Attached to 10.0.0.2 00:27:42.153 Registering asynchronous event callbacks... 00:27:42.153 Starting namespace attribute notice tests for all controllers... 00:27:42.153 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:42.153 aer_cb - Changed Namespace 00:27:42.153 Cleaning up... 
00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:42.153 rmmod nvme_tcp 00:27:42.153 rmmod nvme_fabrics 00:27:42.153 rmmod nvme_keyring 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1476824 ']' 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1476824 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 1476824 ']' 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 1476824 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:27:42.153 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:42.154 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1476824 00:27:42.154 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:42.154 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:42.154 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1476824' 00:27:42.154 killing process with pid 1476824 00:27:42.154 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 1476824 00:27:42.154 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 1476824 00:27:42.412 18:57:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:42.412 18:57:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:42.412 18:57:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini 00:27:42.412 18:57:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:42.412 18:57:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:42.412 18:57:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.412 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:42.412 18:57:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.951 18:57:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:44.951 00:27:44.951 real 0m5.386s 00:27:44.951 user 0m4.149s 00:27:44.951 sys 0m1.954s 00:27:44.951 18:57:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:44.951 18:57:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:44.951 ************************************ 00:27:44.951 END TEST nvmf_aer 00:27:44.951 ************************************ 00:27:44.951 18:57:54 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:44.951 18:57:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:44.951 18:57:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:44.951 18:57:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:44.951 ************************************ 00:27:44.951 START TEST nvmf_async_init 00:27:44.951 ************************************ 00:27:44.951 18:57:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:44.951 * Looking for test storage... 
00:27:44.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:44.951 18:57:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.951 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:44.951 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.951 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.951 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.951 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.951 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.951 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.951 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.951 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=9c0565c23a804a18842cccaff8900c69 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:44.952 18:57:54 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:27:44.952 18:57:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:46.851 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:46.851 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:46.851 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:27:46.851 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:46.852 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:46.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:46.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:27:46.852 00:27:46.852 --- 10.0.0.2 ping statistics --- 00:27:46.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.852 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:46.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:46.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:27:46.852 00:27:46.852 --- 10.0.0.1 ping statistics --- 00:27:46.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.852 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1478810 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1478810 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 1478810 ']' 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:46.852 18:57:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:46.852 [2024-07-20 18:57:57.038133] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
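Editor's note (condensed from the nvmf_tcp_init steps traced above; interface names and addresses are the ones the test chose, cvl_0_0 as the target port and cvl_0_1 as the initiator port): the namespace plumbing that the two ping checks just validated can be reproduced by hand as
  ip netns add cvl_0_0_ns_spdk                                        # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator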
00:27:46.852 [2024-07-20 18:57:57.038223] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:46.852 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.852 [2024-07-20 18:57:57.103546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.111 [2024-07-20 18:57:57.188853] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.111 [2024-07-20 18:57:57.188906] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.111 [2024-07-20 18:57:57.188931] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.111 [2024-07-20 18:57:57.188945] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.111 [2024-07-20 18:57:57.188957] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:47.111 [2024-07-20 18:57:57.189005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:47.111 [2024-07-20 18:57:57.339643] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:47.111 null0 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 9c0565c23a804a18842cccaff8900c69 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:47.111 [2024-07-20 18:57:57.379972] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.111 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:47.368 nvme0n1 00:27:47.368 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.368 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:47.368 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.368 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:47.368 [ 00:27:47.368 { 00:27:47.368 "name": "nvme0n1", 00:27:47.368 "aliases": [ 00:27:47.368 "9c0565c2-3a80-4a18-842c-ccaff8900c69" 00:27:47.368 ], 00:27:47.368 "product_name": "NVMe disk", 00:27:47.368 "block_size": 512, 00:27:47.368 "num_blocks": 2097152, 00:27:47.368 "uuid": "9c0565c2-3a80-4a18-842c-ccaff8900c69", 00:27:47.368 "assigned_rate_limits": { 00:27:47.368 "rw_ios_per_sec": 0, 00:27:47.368 "rw_mbytes_per_sec": 0, 00:27:47.368 "r_mbytes_per_sec": 0, 00:27:47.368 "w_mbytes_per_sec": 0 00:27:47.368 }, 00:27:47.368 "claimed": false, 00:27:47.368 "zoned": false, 00:27:47.368 "supported_io_types": { 00:27:47.368 "read": true, 00:27:47.368 "write": true, 00:27:47.368 "unmap": false, 00:27:47.368 "write_zeroes": true, 00:27:47.368 "flush": true, 00:27:47.368 "reset": true, 00:27:47.368 "compare": true, 00:27:47.368 "compare_and_write": true, 00:27:47.368 "abort": true, 00:27:47.368 "nvme_admin": true, 00:27:47.368 "nvme_io": true 00:27:47.368 }, 00:27:47.368 "memory_domains": [ 00:27:47.368 { 00:27:47.368 "dma_device_id": "system", 00:27:47.368 "dma_device_type": 1 00:27:47.368 } 00:27:47.368 ], 00:27:47.368 "driver_specific": { 00:27:47.368 "nvme": [ 00:27:47.368 { 00:27:47.368 "trid": { 00:27:47.368 "trtype": "TCP", 00:27:47.368 "adrfam": "IPv4", 00:27:47.368 "traddr": "10.0.0.2", 00:27:47.368 "trsvcid": "4420", 00:27:47.368 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:47.368 }, 00:27:47.368 "ctrlr_data": { 00:27:47.368 "cntlid": 1, 00:27:47.368 "vendor_id": "0x8086", 00:27:47.368 "model_number": "SPDK bdev Controller", 00:27:47.368 "serial_number": "00000000000000000000", 00:27:47.368 "firmware_revision": 
"24.05.1", 00:27:47.368 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:47.368 "oacs": { 00:27:47.368 "security": 0, 00:27:47.368 "format": 0, 00:27:47.368 "firmware": 0, 00:27:47.368 "ns_manage": 0 00:27:47.368 }, 00:27:47.368 "multi_ctrlr": true, 00:27:47.368 "ana_reporting": false 00:27:47.368 }, 00:27:47.368 "vs": { 00:27:47.368 "nvme_version": "1.3" 00:27:47.368 }, 00:27:47.368 "ns_data": { 00:27:47.368 "id": 1, 00:27:47.368 "can_share": true 00:27:47.368 } 00:27:47.368 } 00:27:47.368 ], 00:27:47.368 "mp_policy": "active_passive" 00:27:47.368 } 00:27:47.368 } 00:27:47.368 ] 00:27:47.368 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.368 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:47.368 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.368 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:47.368 [2024-07-20 18:57:57.628484] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:47.368 [2024-07-20 18:57:57.628573] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26d0760 (9): Bad file descriptor 00:27:47.626 [2024-07-20 18:57:57.760960] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:47.626 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.626 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:47.626 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.626 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:47.626 [ 00:27:47.626 { 00:27:47.626 "name": "nvme0n1", 00:27:47.626 "aliases": [ 00:27:47.626 "9c0565c2-3a80-4a18-842c-ccaff8900c69" 00:27:47.626 ], 00:27:47.626 "product_name": "NVMe disk", 00:27:47.626 "block_size": 512, 00:27:47.626 "num_blocks": 2097152, 00:27:47.626 "uuid": "9c0565c2-3a80-4a18-842c-ccaff8900c69", 00:27:47.626 "assigned_rate_limits": { 00:27:47.626 "rw_ios_per_sec": 0, 00:27:47.626 "rw_mbytes_per_sec": 0, 00:27:47.626 "r_mbytes_per_sec": 0, 00:27:47.626 "w_mbytes_per_sec": 0 00:27:47.626 }, 00:27:47.626 "claimed": false, 00:27:47.626 "zoned": false, 00:27:47.626 "supported_io_types": { 00:27:47.626 "read": true, 00:27:47.626 "write": true, 00:27:47.626 "unmap": false, 00:27:47.626 "write_zeroes": true, 00:27:47.626 "flush": true, 00:27:47.626 "reset": true, 00:27:47.626 "compare": true, 00:27:47.626 "compare_and_write": true, 00:27:47.626 "abort": true, 00:27:47.626 "nvme_admin": true, 00:27:47.626 "nvme_io": true 00:27:47.626 }, 00:27:47.626 "memory_domains": [ 00:27:47.626 { 00:27:47.626 "dma_device_id": "system", 00:27:47.626 "dma_device_type": 1 00:27:47.626 } 00:27:47.626 ], 00:27:47.626 "driver_specific": { 00:27:47.626 "nvme": [ 00:27:47.626 { 00:27:47.626 "trid": { 00:27:47.626 "trtype": "TCP", 00:27:47.626 "adrfam": "IPv4", 00:27:47.626 "traddr": "10.0.0.2", 00:27:47.626 "trsvcid": "4420", 00:27:47.626 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:47.626 }, 00:27:47.626 "ctrlr_data": { 00:27:47.626 "cntlid": 2, 00:27:47.626 "vendor_id": "0x8086", 00:27:47.626 "model_number": "SPDK bdev Controller", 00:27:47.626 "serial_number": "00000000000000000000", 00:27:47.626 "firmware_revision": "24.05.1", 00:27:47.626 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:47.626 
"oacs": { 00:27:47.626 "security": 0, 00:27:47.626 "format": 0, 00:27:47.626 "firmware": 0, 00:27:47.626 "ns_manage": 0 00:27:47.626 }, 00:27:47.626 "multi_ctrlr": true, 00:27:47.626 "ana_reporting": false 00:27:47.626 }, 00:27:47.626 "vs": { 00:27:47.626 "nvme_version": "1.3" 00:27:47.626 }, 00:27:47.626 "ns_data": { 00:27:47.626 "id": 1, 00:27:47.626 "can_share": true 00:27:47.626 } 00:27:47.626 } 00:27:47.626 ], 00:27:47.626 "mp_policy": "active_passive" 00:27:47.626 } 00:27:47.626 } 00:27:47.626 ] 00:27:47.626 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.626 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.626 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.626 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:47.626 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.626 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:47.626 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.XsB2T9ntDn 00:27:47.626 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:47.626 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.XsB2T9ntDn 00:27:47.626 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:47.626 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.626 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:47.626 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:47.627 [2024-07-20 18:57:57.809141] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:47.627 [2024-07-20 18:57:57.809275] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XsB2T9ntDn 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:47.627 [2024-07-20 18:57:57.817181] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XsB2T9ntDn 00:27:47.627 18:57:57 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:47.627 [2024-07-20 18:57:57.825178] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:47.627 [2024-07-20 18:57:57.825238] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:47.627 nvme0n1 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:47.627 [ 00:27:47.627 { 00:27:47.627 "name": "nvme0n1", 00:27:47.627 "aliases": [ 00:27:47.627 "9c0565c2-3a80-4a18-842c-ccaff8900c69" 00:27:47.627 ], 00:27:47.627 "product_name": "NVMe disk", 00:27:47.627 "block_size": 512, 00:27:47.627 "num_blocks": 2097152, 00:27:47.627 "uuid": "9c0565c2-3a80-4a18-842c-ccaff8900c69", 00:27:47.627 "assigned_rate_limits": { 00:27:47.627 "rw_ios_per_sec": 0, 00:27:47.627 "rw_mbytes_per_sec": 0, 00:27:47.627 "r_mbytes_per_sec": 0, 00:27:47.627 "w_mbytes_per_sec": 0 00:27:47.627 }, 00:27:47.627 "claimed": false, 00:27:47.627 "zoned": false, 00:27:47.627 "supported_io_types": { 00:27:47.627 "read": true, 00:27:47.627 "write": true, 00:27:47.627 "unmap": false, 00:27:47.627 "write_zeroes": true, 00:27:47.627 "flush": true, 00:27:47.627 "reset": true, 00:27:47.627 "compare": true, 00:27:47.627 "compare_and_write": true, 00:27:47.627 "abort": true, 00:27:47.627 "nvme_admin": true, 00:27:47.627 "nvme_io": true 00:27:47.627 }, 00:27:47.627 "memory_domains": [ 00:27:47.627 { 00:27:47.627 "dma_device_id": "system", 00:27:47.627 "dma_device_type": 1 00:27:47.627 } 00:27:47.627 ], 00:27:47.627 "driver_specific": { 00:27:47.627 "nvme": [ 00:27:47.627 { 00:27:47.627 "trid": { 00:27:47.627 "trtype": "TCP", 00:27:47.627 "adrfam": "IPv4", 00:27:47.627 "traddr": "10.0.0.2", 00:27:47.627 "trsvcid": "4421", 00:27:47.627 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:47.627 }, 00:27:47.627 "ctrlr_data": { 00:27:47.627 "cntlid": 3, 00:27:47.627 "vendor_id": "0x8086", 00:27:47.627 "model_number": "SPDK bdev Controller", 00:27:47.627 "serial_number": "00000000000000000000", 00:27:47.627 "firmware_revision": "24.05.1", 00:27:47.627 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:47.627 "oacs": { 00:27:47.627 "security": 0, 00:27:47.627 "format": 0, 00:27:47.627 "firmware": 0, 00:27:47.627 "ns_manage": 0 00:27:47.627 }, 00:27:47.627 "multi_ctrlr": true, 00:27:47.627 "ana_reporting": false 00:27:47.627 }, 00:27:47.627 "vs": { 00:27:47.627 "nvme_version": "1.3" 00:27:47.627 }, 00:27:47.627 "ns_data": { 00:27:47.627 "id": 1, 00:27:47.627 "can_share": true 00:27:47.627 } 00:27:47.627 } 00:27:47.627 ], 00:27:47.627 "mp_policy": "active_passive" 00:27:47.627 } 00:27:47.627 } 00:27:47.627 ] 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- 
# set +x 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.XsB2T9ntDn 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:47.627 18:57:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:47.627 rmmod nvme_tcp 00:27:47.926 rmmod nvme_fabrics 00:27:47.926 rmmod nvme_keyring 00:27:47.926 18:57:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:47.926 18:57:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:27:47.926 18:57:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:27:47.926 18:57:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1478810 ']' 00:27:47.926 18:57:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1478810 00:27:47.926 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 1478810 ']' 00:27:47.926 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 1478810 00:27:47.926 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:27:47.926 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:47.926 18:57:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1478810 00:27:47.926 18:57:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:47.926 18:57:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:47.926 18:57:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1478810' 00:27:47.926 killing process with pid 1478810 00:27:47.926 18:57:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 1478810 00:27:47.926 [2024-07-20 18:57:58.014588] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:47.926 [2024-07-20 18:57:58.014625] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:47.926 18:57:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 1478810 00:27:47.926 18:57:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:47.926 18:57:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:47.926 18:57:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:47.926 18:57:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:47.926 18:57:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:47.926 18:57:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.926 
18:57:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:47.926 18:57:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.454 18:58:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:50.454 00:27:50.454 real 0m5.543s 00:27:50.454 user 0m2.058s 00:27:50.454 sys 0m1.856s 00:27:50.454 18:58:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:50.454 18:58:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:50.454 ************************************ 00:27:50.454 END TEST nvmf_async_init 00:27:50.454 ************************************ 00:27:50.454 18:58:00 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:50.454 18:58:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:50.454 18:58:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:50.454 18:58:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:50.454 ************************************ 00:27:50.454 START TEST dma 00:27:50.454 ************************************ 00:27:50.454 18:58:00 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:50.454 * Looking for test storage... 00:27:50.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:50.454 18:58:00 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.454 18:58:00 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.454 18:58:00 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.454 18:58:00 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.454 18:58:00 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.454 18:58:00 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.454 18:58:00 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.454 18:58:00 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:27:50.454 18:58:00 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:50.454 18:58:00 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:50.454 18:58:00 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:50.454 18:58:00 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:27:50.454 00:27:50.454 real 0m0.068s 00:27:50.454 user 0m0.035s 00:27:50.454 sys 0m0.038s 00:27:50.454 
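Editor's note on the very short dma run above: the host DMA test only exercises RDMA memory domains, so on a TCP transport it exits right after the transport check seen in the trace ('[' tcp '!=' rdma ']' followed by exit 0). The guard amounts to the following sketch (variable name illustrative):
  # host/dma.sh bails out unless the transport under test is RDMA,
  # which is why the TCP run completes in a fraction of a second.
  if [ "$TEST_TRANSPORT" != "rdma" ]; then
      exit 0
  fi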
18:58:00 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:50.455 18:58:00 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:27:50.455 ************************************ 00:27:50.455 END TEST dma 00:27:50.455 ************************************ 00:27:50.455 18:58:00 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:50.455 18:58:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:50.455 18:58:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:50.455 18:58:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:50.455 ************************************ 00:27:50.455 START TEST nvmf_identify 00:27:50.455 ************************************ 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:50.455 * Looking for test storage... 00:27:50.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.455 18:58:00 
nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 
00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:27:50.455 18:58:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:52.354 18:58:02 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:52.354 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:52.354 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:52.354 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:52.354 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:52.354 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:52.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:27:52.354 00:27:52.354 --- 10.0.0.2 ping statistics --- 00:27:52.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.354 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:52.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:52.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:27:52.354 00:27:52.354 --- 10.0.0.1 ping statistics --- 00:27:52.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.354 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1480913 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1480913 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 1480913 ']' 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:52.354 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:52.355 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:52.355 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:52.355 [2024-07-20 18:58:02.630308] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
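Editor's note (a sketch of the launch sequence traced just below, assuming SPDK's default RPC socket /var/tmp/spdk.sock; backgrounding and PID capture are paraphrased from the nvmfappstart helper, not quoted verbatim): the identify test starts its own nvmf_tgt inside the target namespace and waits for the RPC socket before issuing any configuration:
  # Launch the target in the namespace and wait until it accepts RPCs.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # SPDK test helper: polls /var/tmp/spdk.sock until the app answers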
00:27:52.355 [2024-07-20 18:58:02.630381] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.355 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.612 [2024-07-20 18:58:02.700240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:52.612 [2024-07-20 18:58:02.790456] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:52.612 [2024-07-20 18:58:02.790505] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:52.612 [2024-07-20 18:58:02.790527] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:52.612 [2024-07-20 18:58:02.790539] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:52.612 [2024-07-20 18:58:02.790549] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:52.612 [2024-07-20 18:58:02.790610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.612 [2024-07-20 18:58:02.790663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:52.612 [2024-07-20 18:58:02.790731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:52.612 [2024-07-20 18:58:02.790733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.612 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:52.612 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:27:52.612 18:58:02 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:52.612 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.612 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:52.612 [2024-07-20 18:58:02.910258] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.612 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.612 18:58:02 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:52.612 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:52.612 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:52.870 18:58:02 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:52.870 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.870 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:52.870 Malloc0 00:27:52.870 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.870 18:58:02 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:52.870 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.870 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:52.870 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.870 18:58:02 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:27:52.871 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.871 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:52.871 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.871 18:58:02 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:52.871 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.871 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:52.871 [2024-07-20 18:58:02.981047] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.871 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.871 18:58:02 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:52.871 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.871 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:52.871 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.871 18:58:02 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:52.871 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.871 18:58:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:52.871 [ 00:27:52.871 { 00:27:52.871 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:52.871 "subtype": "Discovery", 00:27:52.871 "listen_addresses": [ 00:27:52.871 { 00:27:52.871 "trtype": "TCP", 00:27:52.871 "adrfam": "IPv4", 00:27:52.871 "traddr": "10.0.0.2", 00:27:52.871 "trsvcid": "4420" 00:27:52.871 } 00:27:52.871 ], 00:27:52.871 "allow_any_host": true, 00:27:52.871 "hosts": [] 00:27:52.871 }, 00:27:52.871 { 00:27:52.871 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:52.871 "subtype": "NVMe", 00:27:52.871 "listen_addresses": [ 00:27:52.871 { 00:27:52.871 "trtype": "TCP", 00:27:52.871 "adrfam": "IPv4", 00:27:52.871 "traddr": "10.0.0.2", 00:27:52.871 "trsvcid": "4420" 00:27:52.871 } 00:27:52.871 ], 00:27:52.871 "allow_any_host": true, 00:27:52.871 "hosts": [], 00:27:52.871 "serial_number": "SPDK00000000000001", 00:27:52.871 "model_number": "SPDK bdev Controller", 00:27:52.871 "max_namespaces": 32, 00:27:52.871 "min_cntlid": 1, 00:27:52.871 "max_cntlid": 65519, 00:27:52.871 "namespaces": [ 00:27:52.871 { 00:27:52.871 "nsid": 1, 00:27:52.871 "bdev_name": "Malloc0", 00:27:52.871 "name": "Malloc0", 00:27:52.871 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:52.871 "eui64": "ABCDEF0123456789", 00:27:52.871 "uuid": "d965a9f1-32c5-4650-a182-8e5446efa84d" 00:27:52.871 } 00:27:52.871 ] 00:27:52.871 } 00:27:52.871 ] 00:27:52.871 18:58:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.871 18:58:03 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:52.871 [2024-07-20 18:58:03.018484] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
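The rpc_cmd calls above (transport, malloc bdev, subsystem, namespace, data and discovery listeners, subsystem dump) map one-to-one onto scripts/rpc.py invocations. The sketch below writes that sequence out directly, assuming the target's default /var/tmp/spdk.sock RPC socket; it restates the commands already shown rather than adding anything to the test.

# Equivalent scripts/rpc.py sequence (sketch; default RPC socket assumed).
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems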
00:27:52.871 [2024-07-20 18:58:03.018521] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1481056 ] 00:27:52.871 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.871 [2024-07-20 18:58:03.052919] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:52.871 [2024-07-20 18:58:03.052977] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:52.871 [2024-07-20 18:58:03.052987] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:52.871 [2024-07-20 18:58:03.053002] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:52.871 [2024-07-20 18:58:03.053015] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:52.871 [2024-07-20 18:58:03.053361] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:52.871 [2024-07-20 18:58:03.053421] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1bbd980 0 00:27:52.871 [2024-07-20 18:58:03.066820] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:52.871 [2024-07-20 18:58:03.066841] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:52.871 [2024-07-20 18:58:03.066849] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:52.871 [2024-07-20 18:58:03.066855] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:52.871 [2024-07-20 18:58:03.066920] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.066933] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.066941] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbd980) 00:27:52.871 [2024-07-20 18:58:03.066958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:52.871 [2024-07-20 18:58:03.066985] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c254c0, cid 0, qid 0 00:27:52.871 [2024-07-20 18:58:03.073819] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.871 [2024-07-20 18:58:03.073837] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.871 [2024-07-20 18:58:03.073845] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.073853] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c254c0) on tqpair=0x1bbd980 00:27:52.871 [2024-07-20 18:58:03.073869] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:52.871 [2024-07-20 18:58:03.073895] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:52.871 [2024-07-20 18:58:03.073905] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:52.871 [2024-07-20 18:58:03.073927] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.073935] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.073942] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbd980) 00:27:52.871 [2024-07-20 18:58:03.073954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.871 [2024-07-20 18:58:03.073982] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c254c0, cid 0, qid 0 00:27:52.871 [2024-07-20 18:58:03.074244] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.871 [2024-07-20 18:58:03.074260] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.871 [2024-07-20 18:58:03.074267] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.074274] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c254c0) on tqpair=0x1bbd980 00:27:52.871 [2024-07-20 18:58:03.074289] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:52.871 [2024-07-20 18:58:03.074304] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:52.871 [2024-07-20 18:58:03.074316] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.074324] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.074331] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbd980) 00:27:52.871 [2024-07-20 18:58:03.074342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.871 [2024-07-20 18:58:03.074363] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c254c0, cid 0, qid 0 00:27:52.871 [2024-07-20 18:58:03.074597] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.871 [2024-07-20 18:58:03.074613] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.871 [2024-07-20 18:58:03.074620] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.074626] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c254c0) on tqpair=0x1bbd980 00:27:52.871 [2024-07-20 18:58:03.074637] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:52.871 [2024-07-20 18:58:03.074651] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:52.871 [2024-07-20 18:58:03.074663] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.074671] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.074677] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbd980) 00:27:52.871 [2024-07-20 18:58:03.074688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.871 [2024-07-20 18:58:03.074709] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c254c0, cid 0, qid 0 00:27:52.871 [2024-07-20 18:58:03.074941] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.871 [2024-07-20 
18:58:03.074957] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.871 [2024-07-20 18:58:03.074964] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.074971] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c254c0) on tqpair=0x1bbd980 00:27:52.871 [2024-07-20 18:58:03.074981] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:52.871 [2024-07-20 18:58:03.074998] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.075007] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.075013] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbd980) 00:27:52.871 [2024-07-20 18:58:03.075024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.871 [2024-07-20 18:58:03.075046] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c254c0, cid 0, qid 0 00:27:52.871 [2024-07-20 18:58:03.075276] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.871 [2024-07-20 18:58:03.075296] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.871 [2024-07-20 18:58:03.075303] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.075310] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c254c0) on tqpair=0x1bbd980 00:27:52.871 [2024-07-20 18:58:03.075320] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:52.871 [2024-07-20 18:58:03.075329] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:52.871 [2024-07-20 18:58:03.075342] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:52.871 [2024-07-20 18:58:03.075452] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:52.871 [2024-07-20 18:58:03.075461] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:52.871 [2024-07-20 18:58:03.075475] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.075482] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.075489] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbd980) 00:27:52.871 [2024-07-20 18:58:03.075499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.871 [2024-07-20 18:58:03.075520] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c254c0, cid 0, qid 0 00:27:52.871 [2024-07-20 18:58:03.075774] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.871 [2024-07-20 18:58:03.075787] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.871 [2024-07-20 18:58:03.075802] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.075809] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c254c0) on tqpair=0x1bbd980 00:27:52.871 [2024-07-20 18:58:03.075819] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:52.871 [2024-07-20 18:58:03.075836] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.075844] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.075851] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbd980) 00:27:52.871 [2024-07-20 18:58:03.075862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.871 [2024-07-20 18:58:03.075883] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c254c0, cid 0, qid 0 00:27:52.871 [2024-07-20 18:58:03.076116] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.871 [2024-07-20 18:58:03.076128] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.871 [2024-07-20 18:58:03.076135] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.076142] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c254c0) on tqpair=0x1bbd980 00:27:52.871 [2024-07-20 18:58:03.076151] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:52.871 [2024-07-20 18:58:03.076159] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:52.871 [2024-07-20 18:58:03.076173] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:52.871 [2024-07-20 18:58:03.076192] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:52.871 [2024-07-20 18:58:03.076213] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.076222] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbd980) 00:27:52.871 [2024-07-20 18:58:03.076233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.871 [2024-07-20 18:58:03.076254] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c254c0, cid 0, qid 0 00:27:52.871 [2024-07-20 18:58:03.076527] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:52.871 [2024-07-20 18:58:03.076542] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:52.871 [2024-07-20 18:58:03.076549] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.076556] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bbd980): datao=0, datal=4096, cccid=0 00:27:52.871 [2024-07-20 18:58:03.076564] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c254c0) on tqpair(0x1bbd980): expected_datao=0, payload_size=4096 00:27:52.871 [2024-07-20 18:58:03.076572] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.076583] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.076592] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.076692] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.871 [2024-07-20 18:58:03.076704] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.871 [2024-07-20 18:58:03.076711] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.076718] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c254c0) on tqpair=0x1bbd980 00:27:52.871 [2024-07-20 18:58:03.076735] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:52.871 [2024-07-20 18:58:03.076745] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:52.871 [2024-07-20 18:58:03.076753] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:52.871 [2024-07-20 18:58:03.076761] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:52.871 [2024-07-20 18:58:03.076769] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:52.871 [2024-07-20 18:58:03.076778] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:52.871 [2024-07-20 18:58:03.076807] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:52.871 [2024-07-20 18:58:03.076820] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.076828] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.871 [2024-07-20 18:58:03.076835] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbd980) 00:27:52.871 [2024-07-20 18:58:03.076846] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:52.871 [2024-07-20 18:58:03.076867] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c254c0, cid 0, qid 0 00:27:52.871 [2024-07-20 18:58:03.077104] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.871 [2024-07-20 18:58:03.077116] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.872 [2024-07-20 18:58:03.077124] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.077130] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c254c0) on tqpair=0x1bbd980 00:27:52.872 [2024-07-20 18:58:03.077143] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.077151] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.077161] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbd980) 00:27:52.872 [2024-07-20 18:58:03.077171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:27:52.872 [2024-07-20 18:58:03.077182] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.077189] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.077195] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1bbd980) 00:27:52.872 [2024-07-20 18:58:03.077204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.872 [2024-07-20 18:58:03.077214] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.077221] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.077227] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1bbd980) 00:27:52.872 [2024-07-20 18:58:03.077236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.872 [2024-07-20 18:58:03.077246] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.077253] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.077259] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbd980) 00:27:52.872 [2024-07-20 18:58:03.077284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.872 [2024-07-20 18:58:03.077293] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:52.872 [2024-07-20 18:58:03.077312] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:52.872 [2024-07-20 18:58:03.077325] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.077332] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bbd980) 00:27:52.872 [2024-07-20 18:58:03.077341] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.872 [2024-07-20 18:58:03.077363] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c254c0, cid 0, qid 0 00:27:52.872 [2024-07-20 18:58:03.077388] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c25620, cid 1, qid 0 00:27:52.872 [2024-07-20 18:58:03.077397] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c25780, cid 2, qid 0 00:27:52.872 [2024-07-20 18:58:03.077404] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c258e0, cid 3, qid 0 00:27:52.872 [2024-07-20 18:58:03.077412] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c25a40, cid 4, qid 0 00:27:52.872 [2024-07-20 18:58:03.077671] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.872 [2024-07-20 18:58:03.077687] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.872 [2024-07-20 18:58:03.077694] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.077701] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c25a40) on tqpair=0x1bbd980 
00:27:52.872 [2024-07-20 18:58:03.077711] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:52.872 [2024-07-20 18:58:03.077720] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:52.872 [2024-07-20 18:58:03.077738] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.077747] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bbd980) 00:27:52.872 [2024-07-20 18:58:03.077758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.872 [2024-07-20 18:58:03.077782] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c25a40, cid 4, qid 0 00:27:52.872 [2024-07-20 18:58:03.081823] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:52.872 [2024-07-20 18:58:03.081838] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:52.872 [2024-07-20 18:58:03.081845] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.081851] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bbd980): datao=0, datal=4096, cccid=4 00:27:52.872 [2024-07-20 18:58:03.081859] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c25a40) on tqpair(0x1bbd980): expected_datao=0, payload_size=4096 00:27:52.872 [2024-07-20 18:58:03.081866] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.081876] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.081884] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.121820] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.872 [2024-07-20 18:58:03.121838] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.872 [2024-07-20 18:58:03.121845] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.121852] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c25a40) on tqpair=0x1bbd980 00:27:52.872 [2024-07-20 18:58:03.121871] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:52.872 [2024-07-20 18:58:03.121923] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.121934] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bbd980) 00:27:52.872 [2024-07-20 18:58:03.121946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.872 [2024-07-20 18:58:03.121957] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.121965] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.121971] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1bbd980) 00:27:52.872 [2024-07-20 18:58:03.121980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.872 [2024-07-20 18:58:03.122011] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c25a40, cid 4, qid 0 00:27:52.872 [2024-07-20 18:58:03.122023] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c25ba0, cid 5, qid 0 00:27:52.872 [2024-07-20 18:58:03.122303] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:52.872 [2024-07-20 18:58:03.122319] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:52.872 [2024-07-20 18:58:03.122326] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.122332] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bbd980): datao=0, datal=1024, cccid=4 00:27:52.872 [2024-07-20 18:58:03.122340] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c25a40) on tqpair(0x1bbd980): expected_datao=0, payload_size=1024 00:27:52.872 [2024-07-20 18:58:03.122348] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.122357] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.122365] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.122374] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.872 [2024-07-20 18:58:03.122383] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.872 [2024-07-20 18:58:03.122390] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.122397] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c25ba0) on tqpair=0x1bbd980 00:27:52.872 [2024-07-20 18:58:03.167806] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.872 [2024-07-20 18:58:03.167824] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.872 [2024-07-20 18:58:03.167832] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.167839] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c25a40) on tqpair=0x1bbd980 00:27:52.872 [2024-07-20 18:58:03.167858] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.167867] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bbd980) 00:27:52.872 [2024-07-20 18:58:03.167878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.872 [2024-07-20 18:58:03.167922] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c25a40, cid 4, qid 0 00:27:52.872 [2024-07-20 18:58:03.168184] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:52.872 [2024-07-20 18:58:03.168197] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:52.872 [2024-07-20 18:58:03.168204] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.168210] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bbd980): datao=0, datal=3072, cccid=4 00:27:52.872 [2024-07-20 18:58:03.168218] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c25a40) on tqpair(0x1bbd980): expected_datao=0, payload_size=3072 00:27:52.872 [2024-07-20 18:58:03.168226] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.168236] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:27:52.872 [2024-07-20 18:58:03.168244] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.168346] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.872 [2024-07-20 18:58:03.168358] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.872 [2024-07-20 18:58:03.168365] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.168372] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c25a40) on tqpair=0x1bbd980 00:27:52.872 [2024-07-20 18:58:03.168388] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.168396] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bbd980) 00:27:52.872 [2024-07-20 18:58:03.168407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.872 [2024-07-20 18:58:03.168435] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c25a40, cid 4, qid 0 00:27:52.872 [2024-07-20 18:58:03.168655] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:52.872 [2024-07-20 18:58:03.168670] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:52.872 [2024-07-20 18:58:03.168677] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.168683] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bbd980): datao=0, datal=8, cccid=4 00:27:52.872 [2024-07-20 18:58:03.168691] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c25a40) on tqpair(0x1bbd980): expected_datao=0, payload_size=8 00:27:52.872 [2024-07-20 18:58:03.168699] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.168708] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:52.872 [2024-07-20 18:58:03.168716] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:53.133 [2024-07-20 18:58:03.209028] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.133 [2024-07-20 18:58:03.209046] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.133 [2024-07-20 18:58:03.209054] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.133 [2024-07-20 18:58:03.209061] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c25a40) on tqpair=0x1bbd980 00:27:53.133 ===================================================== 00:27:53.133 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:53.133 ===================================================== 00:27:53.133 Controller Capabilities/Features 00:27:53.133 ================================ 00:27:53.133 Vendor ID: 0000 00:27:53.133 Subsystem Vendor ID: 0000 00:27:53.133 Serial Number: .................... 00:27:53.133 Model Number: ........................................ 
00:27:53.133 Firmware Version: 24.05.1 00:27:53.133 Recommended Arb Burst: 0 00:27:53.133 IEEE OUI Identifier: 00 00 00 00:27:53.133 Multi-path I/O 00:27:53.133 May have multiple subsystem ports: No 00:27:53.133 May have multiple controllers: No 00:27:53.133 Associated with SR-IOV VF: No 00:27:53.133 Max Data Transfer Size: 131072 00:27:53.133 Max Number of Namespaces: 0 00:27:53.133 Max Number of I/O Queues: 1024 00:27:53.133 NVMe Specification Version (VS): 1.3 00:27:53.133 NVMe Specification Version (Identify): 1.3 00:27:53.133 Maximum Queue Entries: 128 00:27:53.133 Contiguous Queues Required: Yes 00:27:53.133 Arbitration Mechanisms Supported 00:27:53.133 Weighted Round Robin: Not Supported 00:27:53.133 Vendor Specific: Not Supported 00:27:53.133 Reset Timeout: 15000 ms 00:27:53.133 Doorbell Stride: 4 bytes 00:27:53.133 NVM Subsystem Reset: Not Supported 00:27:53.133 Command Sets Supported 00:27:53.133 NVM Command Set: Supported 00:27:53.133 Boot Partition: Not Supported 00:27:53.133 Memory Page Size Minimum: 4096 bytes 00:27:53.133 Memory Page Size Maximum: 4096 bytes 00:27:53.133 Persistent Memory Region: Not Supported 00:27:53.133 Optional Asynchronous Events Supported 00:27:53.133 Namespace Attribute Notices: Not Supported 00:27:53.133 Firmware Activation Notices: Not Supported 00:27:53.133 ANA Change Notices: Not Supported 00:27:53.133 PLE Aggregate Log Change Notices: Not Supported 00:27:53.133 LBA Status Info Alert Notices: Not Supported 00:27:53.133 EGE Aggregate Log Change Notices: Not Supported 00:27:53.133 Normal NVM Subsystem Shutdown event: Not Supported 00:27:53.133 Zone Descriptor Change Notices: Not Supported 00:27:53.133 Discovery Log Change Notices: Supported 00:27:53.133 Controller Attributes 00:27:53.133 128-bit Host Identifier: Not Supported 00:27:53.133 Non-Operational Permissive Mode: Not Supported 00:27:53.133 NVM Sets: Not Supported 00:27:53.133 Read Recovery Levels: Not Supported 00:27:53.133 Endurance Groups: Not Supported 00:27:53.133 Predictable Latency Mode: Not Supported 00:27:53.133 Traffic Based Keep ALive: Not Supported 00:27:53.133 Namespace Granularity: Not Supported 00:27:53.133 SQ Associations: Not Supported 00:27:53.133 UUID List: Not Supported 00:27:53.133 Multi-Domain Subsystem: Not Supported 00:27:53.133 Fixed Capacity Management: Not Supported 00:27:53.133 Variable Capacity Management: Not Supported 00:27:53.133 Delete Endurance Group: Not Supported 00:27:53.133 Delete NVM Set: Not Supported 00:27:53.133 Extended LBA Formats Supported: Not Supported 00:27:53.133 Flexible Data Placement Supported: Not Supported 00:27:53.133 00:27:53.133 Controller Memory Buffer Support 00:27:53.133 ================================ 00:27:53.133 Supported: No 00:27:53.133 00:27:53.133 Persistent Memory Region Support 00:27:53.133 ================================ 00:27:53.133 Supported: No 00:27:53.133 00:27:53.133 Admin Command Set Attributes 00:27:53.133 ============================ 00:27:53.133 Security Send/Receive: Not Supported 00:27:53.133 Format NVM: Not Supported 00:27:53.133 Firmware Activate/Download: Not Supported 00:27:53.133 Namespace Management: Not Supported 00:27:53.133 Device Self-Test: Not Supported 00:27:53.133 Directives: Not Supported 00:27:53.133 NVMe-MI: Not Supported 00:27:53.133 Virtualization Management: Not Supported 00:27:53.133 Doorbell Buffer Config: Not Supported 00:27:53.133 Get LBA Status Capability: Not Supported 00:27:53.133 Command & Feature Lockdown Capability: Not Supported 00:27:53.133 Abort Command Limit: 1 00:27:53.133 
Async Event Request Limit: 4 00:27:53.133 Number of Firmware Slots: N/A 00:27:53.133 Firmware Slot 1 Read-Only: N/A 00:27:53.133 Firmware Activation Without Reset: N/A 00:27:53.133 Multiple Update Detection Support: N/A 00:27:53.133 Firmware Update Granularity: No Information Provided 00:27:53.133 Per-Namespace SMART Log: No 00:27:53.133 Asymmetric Namespace Access Log Page: Not Supported 00:27:53.133 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:53.133 Command Effects Log Page: Not Supported 00:27:53.133 Get Log Page Extended Data: Supported 00:27:53.133 Telemetry Log Pages: Not Supported 00:27:53.133 Persistent Event Log Pages: Not Supported 00:27:53.133 Supported Log Pages Log Page: May Support 00:27:53.133 Commands Supported & Effects Log Page: Not Supported 00:27:53.133 Feature Identifiers & Effects Log Page:May Support 00:27:53.133 NVMe-MI Commands & Effects Log Page: May Support 00:27:53.133 Data Area 4 for Telemetry Log: Not Supported 00:27:53.133 Error Log Page Entries Supported: 128 00:27:53.133 Keep Alive: Not Supported 00:27:53.133 00:27:53.133 NVM Command Set Attributes 00:27:53.133 ========================== 00:27:53.133 Submission Queue Entry Size 00:27:53.133 Max: 1 00:27:53.133 Min: 1 00:27:53.133 Completion Queue Entry Size 00:27:53.133 Max: 1 00:27:53.133 Min: 1 00:27:53.133 Number of Namespaces: 0 00:27:53.133 Compare Command: Not Supported 00:27:53.133 Write Uncorrectable Command: Not Supported 00:27:53.133 Dataset Management Command: Not Supported 00:27:53.133 Write Zeroes Command: Not Supported 00:27:53.133 Set Features Save Field: Not Supported 00:27:53.133 Reservations: Not Supported 00:27:53.133 Timestamp: Not Supported 00:27:53.133 Copy: Not Supported 00:27:53.133 Volatile Write Cache: Not Present 00:27:53.133 Atomic Write Unit (Normal): 1 00:27:53.133 Atomic Write Unit (PFail): 1 00:27:53.133 Atomic Compare & Write Unit: 1 00:27:53.133 Fused Compare & Write: Supported 00:27:53.133 Scatter-Gather List 00:27:53.133 SGL Command Set: Supported 00:27:53.133 SGL Keyed: Supported 00:27:53.133 SGL Bit Bucket Descriptor: Not Supported 00:27:53.133 SGL Metadata Pointer: Not Supported 00:27:53.133 Oversized SGL: Not Supported 00:27:53.133 SGL Metadata Address: Not Supported 00:27:53.133 SGL Offset: Supported 00:27:53.133 Transport SGL Data Block: Not Supported 00:27:53.133 Replay Protected Memory Block: Not Supported 00:27:53.133 00:27:53.133 Firmware Slot Information 00:27:53.133 ========================= 00:27:53.133 Active slot: 0 00:27:53.133 00:27:53.133 00:27:53.133 Error Log 00:27:53.133 ========= 00:27:53.133 00:27:53.133 Active Namespaces 00:27:53.133 ================= 00:27:53.133 Discovery Log Page 00:27:53.133 ================== 00:27:53.133 Generation Counter: 2 00:27:53.133 Number of Records: 2 00:27:53.133 Record Format: 0 00:27:53.133 00:27:53.133 Discovery Log Entry 0 00:27:53.133 ---------------------- 00:27:53.133 Transport Type: 3 (TCP) 00:27:53.133 Address Family: 1 (IPv4) 00:27:53.133 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:53.133 Entry Flags: 00:27:53.133 Duplicate Returned Information: 1 00:27:53.133 Explicit Persistent Connection Support for Discovery: 1 00:27:53.133 Transport Requirements: 00:27:53.133 Secure Channel: Not Required 00:27:53.133 Port ID: 0 (0x0000) 00:27:53.133 Controller ID: 65535 (0xffff) 00:27:53.133 Admin Max SQ Size: 128 00:27:53.133 Transport Service Identifier: 4420 00:27:53.133 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:53.134 Transport Address: 10.0.0.2 00:27:53.134 
Discovery Log Entry 1 00:27:53.134 ---------------------- 00:27:53.134 Transport Type: 3 (TCP) 00:27:53.134 Address Family: 1 (IPv4) 00:27:53.134 Subsystem Type: 2 (NVM Subsystem) 00:27:53.134 Entry Flags: 00:27:53.134 Duplicate Returned Information: 0 00:27:53.134 Explicit Persistent Connection Support for Discovery: 0 00:27:53.134 Transport Requirements: 00:27:53.134 Secure Channel: Not Required 00:27:53.134 Port ID: 0 (0x0000) 00:27:53.134 Controller ID: 65535 (0xffff) 00:27:53.134 Admin Max SQ Size: 128 00:27:53.134 Transport Service Identifier: 4420 00:27:53.134 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:53.134 Transport Address: 10.0.0.2 [2024-07-20 18:58:03.209175] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:53.134 [2024-07-20 18:58:03.209203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.134 [2024-07-20 18:58:03.209216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.134 [2024-07-20 18:58:03.209226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.134 [2024-07-20 18:58:03.209236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.134 [2024-07-20 18:58:03.209253] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.209262] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.209269] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbd980) 00:27:53.134 [2024-07-20 18:58:03.209280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.134 [2024-07-20 18:58:03.209320] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c258e0, cid 3, qid 0 00:27:53.134 [2024-07-20 18:58:03.209597] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.134 [2024-07-20 18:58:03.209613] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.134 [2024-07-20 18:58:03.209620] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.209627] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c258e0) on tqpair=0x1bbd980 00:27:53.134 [2024-07-20 18:58:03.209641] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.209649] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.209655] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbd980) 00:27:53.134 [2024-07-20 18:58:03.209666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.134 [2024-07-20 18:58:03.209693] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c258e0, cid 3, qid 0 00:27:53.134 [2024-07-20 18:58:03.209942] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.134 [2024-07-20 18:58:03.209956] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.134 [2024-07-20 18:58:03.209963] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.209970] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c258e0) on tqpair=0x1bbd980 00:27:53.134 [2024-07-20 18:58:03.209979] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:53.134 [2024-07-20 18:58:03.209988] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:53.134 [2024-07-20 18:58:03.210004] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.210013] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.210019] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbd980) 00:27:53.134 [2024-07-20 18:58:03.210030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.134 [2024-07-20 18:58:03.210051] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c258e0, cid 3, qid 0 00:27:53.134 [2024-07-20 18:58:03.210288] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.134 [2024-07-20 18:58:03.210300] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.134 [2024-07-20 18:58:03.210308] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.210314] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c258e0) on tqpair=0x1bbd980 00:27:53.134 [2024-07-20 18:58:03.210332] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.210345] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.210352] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbd980) 00:27:53.134 [2024-07-20 18:58:03.210363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.134 [2024-07-20 18:58:03.210383] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c258e0, cid 3, qid 0 00:27:53.134 [2024-07-20 18:58:03.210611] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.134 [2024-07-20 18:58:03.210623] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.134 [2024-07-20 18:58:03.210630] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.210637] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c258e0) on tqpair=0x1bbd980 00:27:53.134 [2024-07-20 18:58:03.210655] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.210663] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.210670] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbd980) 00:27:53.134 [2024-07-20 18:58:03.210680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.134 [2024-07-20 18:58:03.210701] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c258e0, cid 3, qid 0 00:27:53.134 [2024-07-20 18:58:03.210933] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.134 [2024-07-20 
18:58:03.210949] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.134 [2024-07-20 18:58:03.210956] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.210963] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c258e0) on tqpair=0x1bbd980 00:27:53.134 [2024-07-20 18:58:03.210981] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.210991] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.210997] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbd980) 00:27:53.134 [2024-07-20 18:58:03.211008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.134 [2024-07-20 18:58:03.211028] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c258e0, cid 3, qid 0 00:27:53.134 [2024-07-20 18:58:03.211260] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.134 [2024-07-20 18:58:03.211272] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.134 [2024-07-20 18:58:03.211279] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.211286] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c258e0) on tqpair=0x1bbd980 00:27:53.134 [2024-07-20 18:58:03.211303] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.211312] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.211319] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbd980) 00:27:53.134 [2024-07-20 18:58:03.211329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.134 [2024-07-20 18:58:03.211349] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c258e0, cid 3, qid 0 00:27:53.134 [2024-07-20 18:58:03.211581] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.134 [2024-07-20 18:58:03.211593] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.134 [2024-07-20 18:58:03.211600] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.211607] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c258e0) on tqpair=0x1bbd980 00:27:53.134 [2024-07-20 18:58:03.211624] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.211634] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.211644] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbd980) 00:27:53.134 [2024-07-20 18:58:03.211656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.134 [2024-07-20 18:58:03.211677] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c258e0, cid 3, qid 0 00:27:53.134 [2024-07-20 18:58:03.215818] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.134 [2024-07-20 18:58:03.215834] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.134 [2024-07-20 18:58:03.215841] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
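The discovery log printed earlier advertises two entries at 10.0.0.2:4420: the discovery subsystem itself and the NVM subsystem nqn.2016-06.io.spdk:cnode1. For reference, a Linux initiator outside this harness could cross-check those entries with nvme-cli; this is an illustrative aside, not something the test performs.

# Illustrative cross-check from a Linux initiator (not part of this run).
modprobe nvme-tcp
nvme discover -t tcp -a 10.0.0.2 -s 4420
# Optionally attach the advertised NVM subsystem:
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1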
00:27:53.134 [2024-07-20 18:58:03.215848] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c258e0) on tqpair=0x1bbd980 00:27:53.134 [2024-07-20 18:58:03.215866] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.215890] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.215897] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbd980) 00:27:53.134 [2024-07-20 18:58:03.215908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.134 [2024-07-20 18:58:03.215931] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c258e0, cid 3, qid 0 00:27:53.134 [2024-07-20 18:58:03.216170] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.134 [2024-07-20 18:58:03.216185] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.134 [2024-07-20 18:58:03.216192] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.134 [2024-07-20 18:58:03.216199] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c258e0) on tqpair=0x1bbd980 00:27:53.134 [2024-07-20 18:58:03.216214] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:27:53.134 00:27:53.134 18:58:03 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:53.134 [2024-07-20 18:58:03.251010] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
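The second identify pass launched above targets the NVM subsystem directly rather than the discovery NQN. Run standalone it is the single command sketched below, with the Jenkins workspace path shortened to the repository-relative form.

# Standalone form of the identify invocation above (path shortened).
./build/bin/spdk_nvme_identify \
  -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
  -L all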
00:27:53.135 [2024-07-20 18:58:03.251058] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1481059 ] 00:27:53.135 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.135 [2024-07-20 18:58:03.286491] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:53.135 [2024-07-20 18:58:03.286540] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:53.135 [2024-07-20 18:58:03.286550] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:53.135 [2024-07-20 18:58:03.286564] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:53.135 [2024-07-20 18:58:03.286575] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:53.135 [2024-07-20 18:58:03.286946] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:53.135 [2024-07-20 18:58:03.286990] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b29980 0 00:27:53.135 [2024-07-20 18:58:03.293819] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:53.135 [2024-07-20 18:58:03.293837] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:53.135 [2024-07-20 18:58:03.293844] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:53.135 [2024-07-20 18:58:03.293854] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:53.135 [2024-07-20 18:58:03.293907] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.135 [2024-07-20 18:58:03.293919] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.135 [2024-07-20 18:58:03.293926] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b29980) 00:27:53.135 [2024-07-20 18:58:03.293940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:53.135 [2024-07-20 18:58:03.293966] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b914c0, cid 0, qid 0 00:27:53.135 [2024-07-20 18:58:03.300819] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.135 [2024-07-20 18:58:03.300837] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.135 [2024-07-20 18:58:03.300844] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.135 [2024-07-20 18:58:03.300851] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b914c0) on tqpair=0x1b29980 00:27:53.135 [2024-07-20 18:58:03.300867] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:53.135 [2024-07-20 18:58:03.300877] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:53.135 [2024-07-20 18:58:03.300886] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:53.135 [2024-07-20 18:58:03.300905] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.135 [2024-07-20 18:58:03.300914] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.135 [2024-07-20 
18:58:03.300921] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b29980) 00:27:53.135 [2024-07-20 18:58:03.300932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.135 [2024-07-20 18:58:03.300956] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b914c0, cid 0, qid 0 00:27:53.135 [2024-07-20 18:58:03.301193] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.135 [2024-07-20 18:58:03.301206] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.135 [2024-07-20 18:58:03.301213] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.135 [2024-07-20 18:58:03.301220] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b914c0) on tqpair=0x1b29980 00:27:53.135 [2024-07-20 18:58:03.301233] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:53.135 [2024-07-20 18:58:03.301247] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:53.135 [2024-07-20 18:58:03.301260] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.135 [2024-07-20 18:58:03.301267] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.135 [2024-07-20 18:58:03.301274] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b29980) 00:27:53.135 [2024-07-20 18:58:03.301284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.135 [2024-07-20 18:58:03.301306] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b914c0, cid 0, qid 0 00:27:53.135 [2024-07-20 18:58:03.301535] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.135 [2024-07-20 18:58:03.301547] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.135 [2024-07-20 18:58:03.301554] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.135 [2024-07-20 18:58:03.301560] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b914c0) on tqpair=0x1b29980 00:27:53.135 [2024-07-20 18:58:03.301570] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:53.135 [2024-07-20 18:58:03.301584] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:53.135 [2024-07-20 18:58:03.301600] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.135 [2024-07-20 18:58:03.301608] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.135 [2024-07-20 18:58:03.301615] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b29980) 00:27:53.135 [2024-07-20 18:58:03.301625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.135 [2024-07-20 18:58:03.301646] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b914c0, cid 0, qid 0 00:27:53.135 [2024-07-20 18:58:03.301870] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.135 [2024-07-20 18:58:03.301884] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:27:53.135 [2024-07-20 18:58:03.301891] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.135 [2024-07-20 18:58:03.301898] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b914c0) on tqpair=0x1b29980 00:27:53.135 [2024-07-20 18:58:03.301907] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:53.135 [2024-07-20 18:58:03.301924] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.135 [2024-07-20 18:58:03.301933] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.135 [2024-07-20 18:58:03.301939] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b29980) 00:27:53.135 [2024-07-20 18:58:03.301950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.135 [2024-07-20 18:58:03.301971] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b914c0, cid 0, qid 0 00:27:53.135 [2024-07-20 18:58:03.302208] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.135 [2024-07-20 18:58:03.302223] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.135 [2024-07-20 18:58:03.302230] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.135 [2024-07-20 18:58:03.302237] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b914c0) on tqpair=0x1b29980 00:27:53.135 [2024-07-20 18:58:03.302246] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:53.135 [2024-07-20 18:58:03.302255] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:53.135 [2024-07-20 18:58:03.302268] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:53.135 [2024-07-20 18:58:03.302378] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:53.135 [2024-07-20 18:58:03.302386] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:53.135 [2024-07-20 18:58:03.302398] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.135 [2024-07-20 18:58:03.302406] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.135 [2024-07-20 18:58:03.302412] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b29980) 00:27:53.135 [2024-07-20 18:58:03.302422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.135 [2024-07-20 18:58:03.302443] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b914c0, cid 0, qid 0 00:27:53.135 [2024-07-20 18:58:03.302688] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.135 [2024-07-20 18:58:03.302704] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.135 [2024-07-20 18:58:03.302711] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.135 [2024-07-20 18:58:03.302717] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b914c0) on 
tqpair=0x1b29980 00:27:53.135 [2024-07-20 18:58:03.302727] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:53.135 [2024-07-20 18:58:03.302748] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.135 [2024-07-20 18:58:03.302758] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.135 [2024-07-20 18:58:03.302765] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b29980) 00:27:53.135 [2024-07-20 18:58:03.302775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.135 [2024-07-20 18:58:03.302804] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b914c0, cid 0, qid 0 00:27:53.135 [2024-07-20 18:58:03.303039] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.135 [2024-07-20 18:58:03.303051] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.135 [2024-07-20 18:58:03.303058] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.135 [2024-07-20 18:58:03.303065] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b914c0) on tqpair=0x1b29980 00:27:53.135 [2024-07-20 18:58:03.303073] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:53.135 [2024-07-20 18:58:03.303082] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:53.135 [2024-07-20 18:58:03.303095] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:53.135 [2024-07-20 18:58:03.303112] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:53.135 [2024-07-20 18:58:03.303128] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.135 [2024-07-20 18:58:03.303137] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b29980) 00:27:53.135 [2024-07-20 18:58:03.303148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.135 [2024-07-20 18:58:03.303169] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b914c0, cid 0, qid 0 00:27:53.135 [2024-07-20 18:58:03.303449] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:53.135 [2024-07-20 18:58:03.303464] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:53.135 [2024-07-20 18:58:03.303471] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:53.135 [2024-07-20 18:58:03.303478] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b29980): datao=0, datal=4096, cccid=0 00:27:53.135 [2024-07-20 18:58:03.303485] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b914c0) on tqpair(0x1b29980): expected_datao=0, payload_size=4096 00:27:53.135 [2024-07-20 18:58:03.303493] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.303581] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.303591] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.303780] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.136 [2024-07-20 18:58:03.303791] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.136 [2024-07-20 18:58:03.303805] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.303812] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b914c0) on tqpair=0x1b29980 00:27:53.136 [2024-07-20 18:58:03.303828] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:53.136 [2024-07-20 18:58:03.303838] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:53.136 [2024-07-20 18:58:03.303845] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:53.136 [2024-07-20 18:58:03.303852] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:53.136 [2024-07-20 18:58:03.303863] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:53.136 [2024-07-20 18:58:03.303871] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:53.136 [2024-07-20 18:58:03.303886] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:53.136 [2024-07-20 18:58:03.303897] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.303905] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.303911] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b29980) 00:27:53.136 [2024-07-20 18:58:03.303922] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:53.136 [2024-07-20 18:58:03.303959] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b914c0, cid 0, qid 0 00:27:53.136 [2024-07-20 18:58:03.304236] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.136 [2024-07-20 18:58:03.304249] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.136 [2024-07-20 18:58:03.304255] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.304262] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b914c0) on tqpair=0x1b29980 00:27:53.136 [2024-07-20 18:58:03.304274] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.304282] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.304288] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b29980) 00:27:53.136 [2024-07-20 18:58:03.304298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.136 [2024-07-20 18:58:03.304308] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.304315] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.304322] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1b29980) 00:27:53.136 [2024-07-20 18:58:03.304331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.136 [2024-07-20 18:58:03.304341] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.304348] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.304354] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b29980) 00:27:53.136 [2024-07-20 18:58:03.304363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.136 [2024-07-20 18:58:03.304372] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.304379] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.304386] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b29980) 00:27:53.136 [2024-07-20 18:58:03.304394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.136 [2024-07-20 18:58:03.304418] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:53.136 [2024-07-20 18:58:03.304436] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:53.136 [2024-07-20 18:58:03.304449] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.304456] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b29980) 00:27:53.136 [2024-07-20 18:58:03.304466] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.136 [2024-07-20 18:58:03.304490] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b914c0, cid 0, qid 0 00:27:53.136 [2024-07-20 18:58:03.304518] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b91620, cid 1, qid 0 00:27:53.136 [2024-07-20 18:58:03.304526] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b91780, cid 2, qid 0 00:27:53.136 [2024-07-20 18:58:03.304533] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b918e0, cid 3, qid 0 00:27:53.136 [2024-07-20 18:58:03.304541] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b91a40, cid 4, qid 0 00:27:53.136 [2024-07-20 18:58:03.308821] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.136 [2024-07-20 18:58:03.308837] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.136 [2024-07-20 18:58:03.308844] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.308851] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b91a40) on tqpair=0x1b29980 00:27:53.136 [2024-07-20 18:58:03.308860] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:53.136 [2024-07-20 18:58:03.308869] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:27:53.136 [2024-07-20 18:58:03.308882] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:53.136 [2024-07-20 18:58:03.308907] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:53.136 [2024-07-20 18:58:03.308918] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.308926] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.308932] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b29980) 00:27:53.136 [2024-07-20 18:58:03.308943] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:53.136 [2024-07-20 18:58:03.308965] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b91a40, cid 4, qid 0 00:27:53.136 [2024-07-20 18:58:03.309204] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.136 [2024-07-20 18:58:03.309220] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.136 [2024-07-20 18:58:03.309227] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.309233] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b91a40) on tqpair=0x1b29980 00:27:53.136 [2024-07-20 18:58:03.309303] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:53.136 [2024-07-20 18:58:03.309322] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:53.136 [2024-07-20 18:58:03.309336] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.309344] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b29980) 00:27:53.136 [2024-07-20 18:58:03.309370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.136 [2024-07-20 18:58:03.309392] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b91a40, cid 4, qid 0 00:27:53.136 [2024-07-20 18:58:03.309680] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:53.136 [2024-07-20 18:58:03.309693] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:53.136 [2024-07-20 18:58:03.309700] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.309706] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b29980): datao=0, datal=4096, cccid=4 00:27:53.136 [2024-07-20 18:58:03.309714] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b91a40) on tqpair(0x1b29980): expected_datao=0, payload_size=4096 00:27:53.136 [2024-07-20 18:58:03.309727] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.309815] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.309826] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.354818] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.136 [2024-07-20 18:58:03.354836] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.136 [2024-07-20 18:58:03.354843] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.354850] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b91a40) on tqpair=0x1b29980 00:27:53.136 [2024-07-20 18:58:03.354866] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:53.136 [2024-07-20 18:58:03.354883] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:53.136 [2024-07-20 18:58:03.354900] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:53.136 [2024-07-20 18:58:03.354929] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.354937] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b29980) 00:27:53.136 [2024-07-20 18:58:03.354949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.136 [2024-07-20 18:58:03.354972] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b91a40, cid 4, qid 0 00:27:53.136 [2024-07-20 18:58:03.355230] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:53.136 [2024-07-20 18:58:03.355243] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:53.136 [2024-07-20 18:58:03.355250] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.355256] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b29980): datao=0, datal=4096, cccid=4 00:27:53.136 [2024-07-20 18:58:03.355264] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b91a40) on tqpair(0x1b29980): expected_datao=0, payload_size=4096 00:27:53.136 [2024-07-20 18:58:03.355271] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.355361] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.355370] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.396029] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.136 [2024-07-20 18:58:03.396047] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.136 [2024-07-20 18:58:03.396054] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.136 [2024-07-20 18:58:03.396061] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b91a40) on tqpair=0x1b29980 00:27:53.136 [2024-07-20 18:58:03.396083] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:53.137 [2024-07-20 18:58:03.396102] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:53.137 [2024-07-20 18:58:03.396116] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.137 [2024-07-20 18:58:03.396124] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1b29980) 00:27:53.137 [2024-07-20 18:58:03.396135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.137 [2024-07-20 18:58:03.396158] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b91a40, cid 4, qid 0 00:27:53.137 [2024-07-20 18:58:03.396374] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:53.137 [2024-07-20 18:58:03.396389] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:53.137 [2024-07-20 18:58:03.396400] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:53.137 [2024-07-20 18:58:03.396407] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b29980): datao=0, datal=4096, cccid=4 00:27:53.137 [2024-07-20 18:58:03.396415] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b91a40) on tqpair(0x1b29980): expected_datao=0, payload_size=4096 00:27:53.137 [2024-07-20 18:58:03.396423] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.137 [2024-07-20 18:58:03.396503] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:53.137 [2024-07-20 18:58:03.396513] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:53.137 [2024-07-20 18:58:03.396702] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.137 [2024-07-20 18:58:03.396717] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.137 [2024-07-20 18:58:03.396723] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.137 [2024-07-20 18:58:03.396730] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b91a40) on tqpair=0x1b29980 00:27:53.137 [2024-07-20 18:58:03.396745] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:53.137 [2024-07-20 18:58:03.396760] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:53.137 [2024-07-20 18:58:03.396775] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:53.137 [2024-07-20 18:58:03.396785] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:53.137 [2024-07-20 18:58:03.396804] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:53.137 [2024-07-20 18:58:03.396813] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:53.137 [2024-07-20 18:58:03.396822] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:53.137 [2024-07-20 18:58:03.396830] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:53.137 [2024-07-20 18:58:03.396852] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.137 [2024-07-20 18:58:03.396862] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b29980) 00:27:53.137 [2024-07-20 18:58:03.396873] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.137 [2024-07-20 18:58:03.396884] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.137 [2024-07-20 18:58:03.396892] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.137 [2024-07-20 18:58:03.396898] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b29980) 00:27:53.137 [2024-07-20 18:58:03.396907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.137 [2024-07-20 18:58:03.396933] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b91a40, cid 4, qid 0 00:27:53.137 [2024-07-20 18:58:03.396945] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b91ba0, cid 5, qid 0 00:27:53.137 [2024-07-20 18:58:03.397192] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.137 [2024-07-20 18:58:03.397208] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.137 [2024-07-20 18:58:03.397215] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.137 [2024-07-20 18:58:03.397221] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b91a40) on tqpair=0x1b29980 00:27:53.137 [2024-07-20 18:58:03.397233] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.137 [2024-07-20 18:58:03.397242] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.137 [2024-07-20 18:58:03.397252] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.137 [2024-07-20 18:58:03.397260] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b91ba0) on tqpair=0x1b29980 00:27:53.137 [2024-07-20 18:58:03.397278] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.137 [2024-07-20 18:58:03.397287] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b29980) 00:27:53.137 [2024-07-20 18:58:03.397298] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.137 [2024-07-20 18:58:03.397319] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b91ba0, cid 5, qid 0 00:27:53.137 [2024-07-20 18:58:03.397560] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.137 [2024-07-20 18:58:03.397572] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.137 [2024-07-20 18:58:03.397579] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.137 [2024-07-20 18:58:03.397586] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b91ba0) on tqpair=0x1b29980 00:27:53.137 [2024-07-20 18:58:03.397602] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.137 [2024-07-20 18:58:03.397611] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b29980) 00:27:53.137 [2024-07-20 18:58:03.397622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.137 [2024-07-20 18:58:03.397642] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b91ba0, cid 5, qid 0 00:27:53.137 [2024-07-20 18:58:03.397886] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.137 [2024-07-20 18:58:03.397902] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.137 [2024-07-20 18:58:03.397909] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.137 [2024-07-20 18:58:03.397915] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b91ba0) on tqpair=0x1b29980 00:27:53.137 [2024-07-20 18:58:03.397933] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.137 [2024-07-20 18:58:03.397942] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b29980) 00:27:53.137 [2024-07-20 18:58:03.397952] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.137 [2024-07-20 18:58:03.397973] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b91ba0, cid 5, qid 0 00:27:53.137 [2024-07-20 18:58:03.398198] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.137 [2024-07-20 18:58:03.398211] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.137 [2024-07-20 18:58:03.398218] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.137 [2024-07-20 18:58:03.398224] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b91ba0) on tqpair=0x1b29980 00:27:53.137 [2024-07-20 18:58:03.398244] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.137 [2024-07-20 18:58:03.398254] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b29980) 00:27:53.137 [2024-07-20 18:58:03.398265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.137 [2024-07-20 18:58:03.398276] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.137 [2024-07-20 18:58:03.398284] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b29980) 00:27:53.137 [2024-07-20 18:58:03.398293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.137 [2024-07-20 18:58:03.398305] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.137 [2024-07-20 18:58:03.398312] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1b29980) 00:27:53.137 [2024-07-20 18:58:03.398325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.137 [2024-07-20 18:58:03.398337] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.138 [2024-07-20 18:58:03.398345] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b29980) 00:27:53.138 [2024-07-20 18:58:03.398354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.138 [2024-07-20 18:58:03.398391] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b91ba0, cid 5, qid 0 00:27:53.138 [2024-07-20 18:58:03.398402] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b91a40, cid 4, qid 0 00:27:53.138 [2024-07-20 18:58:03.398410] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1b91d00, cid 6, qid 0 00:27:53.138 [2024-07-20 18:58:03.398417] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b91e60, cid 7, qid 0 00:27:53.138 [2024-07-20 18:58:03.398760] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:53.138 [2024-07-20 18:58:03.398773] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:53.138 [2024-07-20 18:58:03.398780] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:53.138 [2024-07-20 18:58:03.398786] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b29980): datao=0, datal=8192, cccid=5 00:27:53.138 [2024-07-20 18:58:03.402803] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b91ba0) on tqpair(0x1b29980): expected_datao=0, payload_size=8192 00:27:53.138 [2024-07-20 18:58:03.402815] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.138 [2024-07-20 18:58:03.402839] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:53.138 [2024-07-20 18:58:03.402849] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:53.138 [2024-07-20 18:58:03.402857] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:53.138 [2024-07-20 18:58:03.402866] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:53.138 [2024-07-20 18:58:03.402873] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:53.138 [2024-07-20 18:58:03.402879] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b29980): datao=0, datal=512, cccid=4 00:27:53.138 [2024-07-20 18:58:03.402887] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b91a40) on tqpair(0x1b29980): expected_datao=0, payload_size=512 00:27:53.138 [2024-07-20 18:58:03.402894] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.138 [2024-07-20 18:58:03.402903] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:53.138 [2024-07-20 18:58:03.402911] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:53.138 [2024-07-20 18:58:03.402919] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:53.138 [2024-07-20 18:58:03.402927] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:53.138 [2024-07-20 18:58:03.402934] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:53.138 [2024-07-20 18:58:03.402940] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b29980): datao=0, datal=512, cccid=6 00:27:53.138 [2024-07-20 18:58:03.402947] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b91d00) on tqpair(0x1b29980): expected_datao=0, payload_size=512 00:27:53.138 [2024-07-20 18:58:03.402954] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.138 [2024-07-20 18:58:03.402963] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:53.138 [2024-07-20 18:58:03.402970] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:53.138 [2024-07-20 18:58:03.402978] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:53.138 [2024-07-20 18:58:03.402987] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:53.138 [2024-07-20 18:58:03.402993] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:53.138 [2024-07-20 18:58:03.402999] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b29980): datao=0, datal=4096, cccid=7 
00:27:53.138 [2024-07-20 18:58:03.403010] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b91e60) on tqpair(0x1b29980): expected_datao=0, payload_size=4096 00:27:53.138 [2024-07-20 18:58:03.403018] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.138 [2024-07-20 18:58:03.403027] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:53.138 [2024-07-20 18:58:03.403034] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:53.138 [2024-07-20 18:58:03.403042] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.138 [2024-07-20 18:58:03.403051] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.138 [2024-07-20 18:58:03.403057] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.138 [2024-07-20 18:58:03.403064] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b91ba0) on tqpair=0x1b29980 00:27:53.138 [2024-07-20 18:58:03.403098] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.138 [2024-07-20 18:58:03.403109] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.138 [2024-07-20 18:58:03.403115] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.138 [2024-07-20 18:58:03.403121] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b91a40) on tqpair=0x1b29980 00:27:53.138 [2024-07-20 18:58:03.403136] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.138 [2024-07-20 18:58:03.403145] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.138 [2024-07-20 18:58:03.403152] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.138 [2024-07-20 18:58:03.403158] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b91d00) on tqpair=0x1b29980 00:27:53.138 [2024-07-20 18:58:03.403172] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.138 [2024-07-20 18:58:03.403181] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.138 [2024-07-20 18:58:03.403188] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.138 [2024-07-20 18:58:03.403194] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b91e60) on tqpair=0x1b29980 00:27:53.138 ===================================================== 00:27:53.138 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:53.138 ===================================================== 00:27:53.138 Controller Capabilities/Features 00:27:53.138 ================================ 00:27:53.138 Vendor ID: 8086 00:27:53.138 Subsystem Vendor ID: 8086 00:27:53.138 Serial Number: SPDK00000000000001 00:27:53.138 Model Number: SPDK bdev Controller 00:27:53.138 Firmware Version: 24.05.1 00:27:53.138 Recommended Arb Burst: 6 00:27:53.138 IEEE OUI Identifier: e4 d2 5c 00:27:53.138 Multi-path I/O 00:27:53.138 May have multiple subsystem ports: Yes 00:27:53.138 May have multiple controllers: Yes 00:27:53.138 Associated with SR-IOV VF: No 00:27:53.138 Max Data Transfer Size: 131072 00:27:53.138 Max Number of Namespaces: 32 00:27:53.138 Max Number of I/O Queues: 127 00:27:53.138 NVMe Specification Version (VS): 1.3 00:27:53.138 NVMe Specification Version (Identify): 1.3 00:27:53.138 Maximum Queue Entries: 128 00:27:53.138 Contiguous Queues Required: Yes 00:27:53.138 Arbitration Mechanisms Supported 00:27:53.138 Weighted Round Robin: Not Supported 00:27:53.138 Vendor 
Specific: Not Supported 00:27:53.138 Reset Timeout: 15000 ms 00:27:53.138 Doorbell Stride: 4 bytes 00:27:53.138 NVM Subsystem Reset: Not Supported 00:27:53.138 Command Sets Supported 00:27:53.138 NVM Command Set: Supported 00:27:53.138 Boot Partition: Not Supported 00:27:53.138 Memory Page Size Minimum: 4096 bytes 00:27:53.138 Memory Page Size Maximum: 4096 bytes 00:27:53.138 Persistent Memory Region: Not Supported 00:27:53.138 Optional Asynchronous Events Supported 00:27:53.138 Namespace Attribute Notices: Supported 00:27:53.138 Firmware Activation Notices: Not Supported 00:27:53.138 ANA Change Notices: Not Supported 00:27:53.138 PLE Aggregate Log Change Notices: Not Supported 00:27:53.138 LBA Status Info Alert Notices: Not Supported 00:27:53.138 EGE Aggregate Log Change Notices: Not Supported 00:27:53.138 Normal NVM Subsystem Shutdown event: Not Supported 00:27:53.138 Zone Descriptor Change Notices: Not Supported 00:27:53.138 Discovery Log Change Notices: Not Supported 00:27:53.138 Controller Attributes 00:27:53.138 128-bit Host Identifier: Supported 00:27:53.138 Non-Operational Permissive Mode: Not Supported 00:27:53.138 NVM Sets: Not Supported 00:27:53.138 Read Recovery Levels: Not Supported 00:27:53.138 Endurance Groups: Not Supported 00:27:53.138 Predictable Latency Mode: Not Supported 00:27:53.138 Traffic Based Keep ALive: Not Supported 00:27:53.138 Namespace Granularity: Not Supported 00:27:53.138 SQ Associations: Not Supported 00:27:53.138 UUID List: Not Supported 00:27:53.138 Multi-Domain Subsystem: Not Supported 00:27:53.138 Fixed Capacity Management: Not Supported 00:27:53.138 Variable Capacity Management: Not Supported 00:27:53.138 Delete Endurance Group: Not Supported 00:27:53.138 Delete NVM Set: Not Supported 00:27:53.138 Extended LBA Formats Supported: Not Supported 00:27:53.138 Flexible Data Placement Supported: Not Supported 00:27:53.138 00:27:53.138 Controller Memory Buffer Support 00:27:53.138 ================================ 00:27:53.138 Supported: No 00:27:53.138 00:27:53.138 Persistent Memory Region Support 00:27:53.138 ================================ 00:27:53.138 Supported: No 00:27:53.138 00:27:53.138 Admin Command Set Attributes 00:27:53.138 ============================ 00:27:53.138 Security Send/Receive: Not Supported 00:27:53.138 Format NVM: Not Supported 00:27:53.138 Firmware Activate/Download: Not Supported 00:27:53.138 Namespace Management: Not Supported 00:27:53.138 Device Self-Test: Not Supported 00:27:53.138 Directives: Not Supported 00:27:53.138 NVMe-MI: Not Supported 00:27:53.138 Virtualization Management: Not Supported 00:27:53.138 Doorbell Buffer Config: Not Supported 00:27:53.138 Get LBA Status Capability: Not Supported 00:27:53.138 Command & Feature Lockdown Capability: Not Supported 00:27:53.138 Abort Command Limit: 4 00:27:53.138 Async Event Request Limit: 4 00:27:53.138 Number of Firmware Slots: N/A 00:27:53.138 Firmware Slot 1 Read-Only: N/A 00:27:53.138 Firmware Activation Without Reset: N/A 00:27:53.138 Multiple Update Detection Support: N/A 00:27:53.138 Firmware Update Granularity: No Information Provided 00:27:53.138 Per-Namespace SMART Log: No 00:27:53.138 Asymmetric Namespace Access Log Page: Not Supported 00:27:53.138 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:53.138 Command Effects Log Page: Supported 00:27:53.138 Get Log Page Extended Data: Supported 00:27:53.138 Telemetry Log Pages: Not Supported 00:27:53.138 Persistent Event Log Pages: Not Supported 00:27:53.138 Supported Log Pages Log Page: May Support 00:27:53.138 Commands 
Supported & Effects Log Page: Not Supported 00:27:53.139 Feature Identifiers & Effects Log Page:May Support 00:27:53.139 NVMe-MI Commands & Effects Log Page: May Support 00:27:53.139 Data Area 4 for Telemetry Log: Not Supported 00:27:53.139 Error Log Page Entries Supported: 128 00:27:53.139 Keep Alive: Supported 00:27:53.139 Keep Alive Granularity: 10000 ms 00:27:53.139 00:27:53.139 NVM Command Set Attributes 00:27:53.139 ========================== 00:27:53.139 Submission Queue Entry Size 00:27:53.139 Max: 64 00:27:53.139 Min: 64 00:27:53.139 Completion Queue Entry Size 00:27:53.139 Max: 16 00:27:53.139 Min: 16 00:27:53.139 Number of Namespaces: 32 00:27:53.139 Compare Command: Supported 00:27:53.139 Write Uncorrectable Command: Not Supported 00:27:53.139 Dataset Management Command: Supported 00:27:53.139 Write Zeroes Command: Supported 00:27:53.139 Set Features Save Field: Not Supported 00:27:53.139 Reservations: Supported 00:27:53.139 Timestamp: Not Supported 00:27:53.139 Copy: Supported 00:27:53.139 Volatile Write Cache: Present 00:27:53.139 Atomic Write Unit (Normal): 1 00:27:53.139 Atomic Write Unit (PFail): 1 00:27:53.139 Atomic Compare & Write Unit: 1 00:27:53.139 Fused Compare & Write: Supported 00:27:53.139 Scatter-Gather List 00:27:53.139 SGL Command Set: Supported 00:27:53.139 SGL Keyed: Supported 00:27:53.139 SGL Bit Bucket Descriptor: Not Supported 00:27:53.139 SGL Metadata Pointer: Not Supported 00:27:53.139 Oversized SGL: Not Supported 00:27:53.139 SGL Metadata Address: Not Supported 00:27:53.139 SGL Offset: Supported 00:27:53.139 Transport SGL Data Block: Not Supported 00:27:53.139 Replay Protected Memory Block: Not Supported 00:27:53.139 00:27:53.139 Firmware Slot Information 00:27:53.139 ========================= 00:27:53.139 Active slot: 1 00:27:53.139 Slot 1 Firmware Revision: 24.05.1 00:27:53.139 00:27:53.139 00:27:53.139 Commands Supported and Effects 00:27:53.139 ============================== 00:27:53.139 Admin Commands 00:27:53.139 -------------- 00:27:53.139 Get Log Page (02h): Supported 00:27:53.139 Identify (06h): Supported 00:27:53.139 Abort (08h): Supported 00:27:53.139 Set Features (09h): Supported 00:27:53.139 Get Features (0Ah): Supported 00:27:53.139 Asynchronous Event Request (0Ch): Supported 00:27:53.139 Keep Alive (18h): Supported 00:27:53.139 I/O Commands 00:27:53.139 ------------ 00:27:53.139 Flush (00h): Supported LBA-Change 00:27:53.139 Write (01h): Supported LBA-Change 00:27:53.139 Read (02h): Supported 00:27:53.139 Compare (05h): Supported 00:27:53.139 Write Zeroes (08h): Supported LBA-Change 00:27:53.139 Dataset Management (09h): Supported LBA-Change 00:27:53.139 Copy (19h): Supported LBA-Change 00:27:53.139 Unknown (79h): Supported LBA-Change 00:27:53.139 Unknown (7Ah): Supported 00:27:53.139 00:27:53.139 Error Log 00:27:53.139 ========= 00:27:53.139 00:27:53.139 Arbitration 00:27:53.139 =========== 00:27:53.139 Arbitration Burst: 1 00:27:53.139 00:27:53.139 Power Management 00:27:53.139 ================ 00:27:53.139 Number of Power States: 1 00:27:53.139 Current Power State: Power State #0 00:27:53.139 Power State #0: 00:27:53.139 Max Power: 0.00 W 00:27:53.139 Non-Operational State: Operational 00:27:53.139 Entry Latency: Not Reported 00:27:53.139 Exit Latency: Not Reported 00:27:53.139 Relative Read Throughput: 0 00:27:53.139 Relative Read Latency: 0 00:27:53.139 Relative Write Throughput: 0 00:27:53.139 Relative Write Latency: 0 00:27:53.139 Idle Power: Not Reported 00:27:53.139 Active Power: Not Reported 00:27:53.139 Non-Operational 
Permissive Mode: Not Supported 00:27:53.139 00:27:53.139 Health Information 00:27:53.139 ================== 00:27:53.139 Critical Warnings: 00:27:53.139 Available Spare Space: OK 00:27:53.139 Temperature: OK 00:27:53.139 Device Reliability: OK 00:27:53.139 Read Only: No 00:27:53.139 Volatile Memory Backup: OK 00:27:53.139 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:53.139 Temperature Threshold: [2024-07-20 18:58:03.403323] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.139 [2024-07-20 18:58:03.403335] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b29980) 00:27:53.139 [2024-07-20 18:58:03.403346] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.139 [2024-07-20 18:58:03.403370] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b91e60, cid 7, qid 0 00:27:53.139 [2024-07-20 18:58:03.403628] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.139 [2024-07-20 18:58:03.403640] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.139 [2024-07-20 18:58:03.403647] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.139 [2024-07-20 18:58:03.403654] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b91e60) on tqpair=0x1b29980 00:27:53.139 [2024-07-20 18:58:03.403691] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:53.139 [2024-07-20 18:58:03.403712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.139 [2024-07-20 18:58:03.403724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.139 [2024-07-20 18:58:03.403734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.139 [2024-07-20 18:58:03.403744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.139 [2024-07-20 18:58:03.403756] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.139 [2024-07-20 18:58:03.403764] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.139 [2024-07-20 18:58:03.403771] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b29980) 00:27:53.139 [2024-07-20 18:58:03.403785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.139 [2024-07-20 18:58:03.403817] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b918e0, cid 3, qid 0 00:27:53.139 [2024-07-20 18:58:03.404050] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.139 [2024-07-20 18:58:03.404063] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.139 [2024-07-20 18:58:03.404069] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.139 [2024-07-20 18:58:03.404076] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b918e0) on tqpair=0x1b29980 00:27:53.139 [2024-07-20 18:58:03.404088] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.139 [2024-07-20 18:58:03.404096] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.139 [2024-07-20 18:58:03.404102] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b29980) 00:27:53.139 [2024-07-20 18:58:03.404113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.139 [2024-07-20 18:58:03.404139] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b918e0, cid 3, qid 0 00:27:53.139 [2024-07-20 18:58:03.404383] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.139 [2024-07-20 18:58:03.404395] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.139 [2024-07-20 18:58:03.404402] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.139 [2024-07-20 18:58:03.404408] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b918e0) on tqpair=0x1b29980 00:27:53.139 [2024-07-20 18:58:03.404417] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:53.139 [2024-07-20 18:58:03.404425] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:53.139 [2024-07-20 18:58:03.404441] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.139 [2024-07-20 18:58:03.404450] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.139 [2024-07-20 18:58:03.404457] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b29980) 00:27:53.139 [2024-07-20 18:58:03.404467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.139 [2024-07-20 18:58:03.404487] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b918e0, cid 3, qid 0 00:27:53.139 [2024-07-20 18:58:03.404729] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.139 [2024-07-20 18:58:03.404745] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.139 [2024-07-20 18:58:03.404751] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.139 [2024-07-20 18:58:03.404758] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b918e0) on tqpair=0x1b29980 00:27:53.139 [2024-07-20 18:58:03.404776] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.139 [2024-07-20 18:58:03.404785] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.139 [2024-07-20 18:58:03.404798] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b29980) 00:27:53.139 [2024-07-20 18:58:03.404810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.139 [2024-07-20 18:58:03.404832] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b918e0, cid 3, qid 0 00:27:53.139 [2024-07-20 18:58:03.405071] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.139 [2024-07-20 18:58:03.405086] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.139 [2024-07-20 18:58:03.405092] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.139 [2024-07-20 18:58:03.405099] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b918e0) on tqpair=0x1b29980 00:27:53.139 [2024-07-20 18:58:03.405117] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.139 [2024-07-20 18:58:03.405130] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.139 [2024-07-20 18:58:03.405137] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b29980) 00:27:53.139 [2024-07-20 18:58:03.405148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.139 [2024-07-20 18:58:03.405168] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b918e0, cid 3, qid 0 00:27:53.139 [2024-07-20 18:58:03.405398] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.140 [2024-07-20 18:58:03.405413] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.140 [2024-07-20 18:58:03.405420] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.140 [2024-07-20 18:58:03.405427] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b918e0) on tqpair=0x1b29980 00:27:53.140 [2024-07-20 18:58:03.405444] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.140 [2024-07-20 18:58:03.405454] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.140 [2024-07-20 18:58:03.405460] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b29980) 00:27:53.140 [2024-07-20 18:58:03.405470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.140 [2024-07-20 18:58:03.405491] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b918e0, cid 3, qid 0 00:27:53.140 [2024-07-20 18:58:03.405719] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.140 [2024-07-20 18:58:03.405731] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.140 [2024-07-20 18:58:03.405737] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.140 [2024-07-20 18:58:03.405744] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b918e0) on tqpair=0x1b29980 00:27:53.140 [2024-07-20 18:58:03.405761] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.140 [2024-07-20 18:58:03.405769] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.140 [2024-07-20 18:58:03.405776] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b29980) 00:27:53.140 [2024-07-20 18:58:03.405787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.140 [2024-07-20 18:58:03.405815] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b918e0, cid 3, qid 0 00:27:53.140 [2024-07-20 18:58:03.406052] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.140 [2024-07-20 18:58:03.406067] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.140 [2024-07-20 18:58:03.406074] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.140 [2024-07-20 18:58:03.406081] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b918e0) on tqpair=0x1b29980 00:27:53.140 [2024-07-20 18:58:03.406098] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.140 [2024-07-20 18:58:03.406107] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.140 [2024-07-20 
18:58:03.406114] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b29980) 00:27:53.140 [2024-07-20 18:58:03.406124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.140 [2024-07-20 18:58:03.406145] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b918e0, cid 3, qid 0 00:27:53.140 [2024-07-20 18:58:03.409817] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.140 [2024-07-20 18:58:03.409834] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.140 [2024-07-20 18:58:03.409841] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.140 [2024-07-20 18:58:03.409848] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b918e0) on tqpair=0x1b29980 00:27:53.140 [2024-07-20 18:58:03.409881] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:53.140 [2024-07-20 18:58:03.409891] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:53.140 [2024-07-20 18:58:03.409901] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b29980) 00:27:53.140 [2024-07-20 18:58:03.409913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.140 [2024-07-20 18:58:03.409936] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b918e0, cid 3, qid 0 00:27:53.140 [2024-07-20 18:58:03.410174] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:53.140 [2024-07-20 18:58:03.410186] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:53.140 [2024-07-20 18:58:03.410193] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:53.140 [2024-07-20 18:58:03.410200] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b918e0) on tqpair=0x1b29980 00:27:53.140 [2024-07-20 18:58:03.410214] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:27:53.140 0 Kelvin (-273 Celsius) 00:27:53.140 Available Spare: 0% 00:27:53.140 Available Spare Threshold: 0% 00:27:53.140 Life Percentage Used: 0% 00:27:53.140 Data Units Read: 0 00:27:53.140 Data Units Written: 0 00:27:53.140 Host Read Commands: 0 00:27:53.140 Host Write Commands: 0 00:27:53.140 Controller Busy Time: 0 minutes 00:27:53.140 Power Cycles: 0 00:27:53.140 Power On Hours: 0 hours 00:27:53.140 Unsafe Shutdowns: 0 00:27:53.140 Unrecoverable Media Errors: 0 00:27:53.140 Lifetime Error Log Entries: 0 00:27:53.140 Warning Temperature Time: 0 minutes 00:27:53.140 Critical Temperature Time: 0 minutes 00:27:53.140 00:27:53.140 Number of Queues 00:27:53.140 ================ 00:27:53.140 Number of I/O Submission Queues: 127 00:27:53.140 Number of I/O Completion Queues: 127 00:27:53.140 00:27:53.140 Active Namespaces 00:27:53.140 ================= 00:27:53.140 Namespace ID:1 00:27:53.140 Error Recovery Timeout: Unlimited 00:27:53.140 Command Set Identifier: NVM (00h) 00:27:53.140 Deallocate: Supported 00:27:53.140 Deallocated/Unwritten Error: Not Supported 00:27:53.140 Deallocated Read Value: Unknown 00:27:53.140 Deallocate in Write Zeroes: Not Supported 00:27:53.140 Deallocated Guard Field: 0xFFFF 00:27:53.140 Flush: Supported 00:27:53.140 Reservation: Supported 00:27:53.140 Namespace Sharing Capabilities: Multiple Controllers 00:27:53.140 Size (in 
LBAs): 131072 (0GiB) 00:27:53.140 Capacity (in LBAs): 131072 (0GiB) 00:27:53.140 Utilization (in LBAs): 131072 (0GiB) 00:27:53.140 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:53.140 EUI64: ABCDEF0123456789 00:27:53.140 UUID: d965a9f1-32c5-4650-a182-8e5446efa84d 00:27:53.140 Thin Provisioning: Not Supported 00:27:53.140 Per-NS Atomic Units: Yes 00:27:53.140 Atomic Boundary Size (Normal): 0 00:27:53.140 Atomic Boundary Size (PFail): 0 00:27:53.140 Atomic Boundary Offset: 0 00:27:53.140 Maximum Single Source Range Length: 65535 00:27:53.140 Maximum Copy Length: 65535 00:27:53.140 Maximum Source Range Count: 1 00:27:53.140 NGUID/EUI64 Never Reused: No 00:27:53.140 Namespace Write Protected: No 00:27:53.140 Number of LBA Formats: 1 00:27:53.140 Current LBA Format: LBA Format #00 00:27:53.140 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:53.140 00:27:53.140 18:58:03 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:53.140 18:58:03 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:53.140 18:58:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.140 18:58:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:53.140 18:58:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.140 18:58:03 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:53.140 18:58:03 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:53.140 18:58:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:53.140 18:58:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:27:53.140 18:58:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:53.140 18:58:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:27:53.140 18:58:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:53.140 18:58:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:53.140 rmmod nvme_tcp 00:27:53.398 rmmod nvme_fabrics 00:27:53.398 rmmod nvme_keyring 00:27:53.398 18:58:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:53.398 18:58:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:27:53.398 18:58:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:27:53.398 18:58:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1480913 ']' 00:27:53.398 18:58:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1480913 00:27:53.398 18:58:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 1480913 ']' 00:27:53.398 18:58:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 1480913 00:27:53.398 18:58:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:27:53.398 18:58:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:53.398 18:58:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1480913 00:27:53.398 18:58:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:53.398 18:58:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:53.398 18:58:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1480913' 00:27:53.398 killing process with pid 1480913 00:27:53.398 18:58:03 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@965 -- # kill 1480913 00:27:53.398 18:58:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 1480913 00:27:53.656 18:58:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:53.656 18:58:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:53.656 18:58:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:53.656 18:58:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:53.656 18:58:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:53.656 18:58:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.656 18:58:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:53.656 18:58:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.557 18:58:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:55.557 00:27:55.558 real 0m5.375s 00:27:55.558 user 0m4.410s 00:27:55.558 sys 0m1.903s 00:27:55.558 18:58:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:55.558 18:58:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:55.558 ************************************ 00:27:55.558 END TEST nvmf_identify 00:27:55.558 ************************************ 00:27:55.558 18:58:05 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:55.558 18:58:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:55.558 18:58:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:55.558 18:58:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:55.558 ************************************ 00:27:55.558 START TEST nvmf_perf 00:27:55.558 ************************************ 00:27:55.558 18:58:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:55.816 * Looking for test storage... 
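The identify run above winds down by removing the test subsystem over RPC, unloading the initiator-side kernel modules, and finally stopping the target process whose pid was recorded at startup. A minimal sketch of that order, assuming rpc.py talks to the running target's default RPC socket and $nvmfpid holds the pid seen in the log (illustrative, not the harness code itself):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem before touching the host side
  modprobe -v -r nvme-tcp                                    # unload initiator modules
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                         # stop the nvmf_tgt reactor last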
00:27:55.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.816 18:58:05 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.816 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:55.817 18:58:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:27:57.719 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:57.719 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:57.719 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:57.719 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:57.719 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:57.719 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:57.720 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:57.720 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:57.720 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:57.720 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:57.720 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:57.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:57.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:27:57.979 00:27:57.979 --- 10.0.0.2 ping statistics --- 00:27:57.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.979 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:57.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:57.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:27:57.979 00:27:57.979 --- 10.0.0.1 ping statistics --- 00:27:57.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.979 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1482989 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1482989 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 1482989 ']' 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:57.979 18:58:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:57.979 [2024-07-20 18:58:08.231532] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:57.979 [2024-07-20 18:58:08.231618] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:57.979 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.979 [2024-07-20 18:58:08.297256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:58.238 [2024-07-20 18:58:08.386867] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:58.238 [2024-07-20 18:58:08.386923] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
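The nvmf_tcp_init sequence above splits the NIC's two ports between the root namespace (initiator, 10.0.0.1 on cvl_0_1) and a private namespace (target, 10.0.0.2 on cvl_0_0), opens TCP port 4420, and ping-checks both directions. Roughly, using the interface and namespace names from this run (an illustrative sketch, not the harness code):

  ip netns add cvl_0_0_ns_spdk                                   # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1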
00:27:58.238 [2024-07-20 18:58:08.386938] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:58.238 [2024-07-20 18:58:08.386949] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:58.238 [2024-07-20 18:58:08.386960] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:58.238 [2024-07-20 18:58:08.387024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.238 [2024-07-20 18:58:08.387062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:58.238 [2024-07-20 18:58:08.387119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:58.238 [2024-07-20 18:58:08.387122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.238 18:58:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:58.238 18:58:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:27:58.238 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:58.238 18:58:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:58.238 18:58:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:58.238 18:58:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:58.238 18:58:08 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:58.238 18:58:08 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:01.519 18:58:11 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:01.519 18:58:11 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:01.787 18:58:11 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:01.787 18:58:11 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:02.043 18:58:12 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:02.043 18:58:12 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:28:02.043 18:58:12 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:02.043 18:58:12 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:02.043 18:58:12 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:02.043 [2024-07-20 18:58:12.360213] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:02.299 18:58:12 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:02.299 18:58:12 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:02.555 18:58:12 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:02.555 18:58:12 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:02.555 18:58:12 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:02.815 18:58:13 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:03.087 [2024-07-20 18:58:13.351885] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:03.087 18:58:13 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:03.344 18:58:13 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:03.344 18:58:13 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:03.344 18:58:13 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:03.344 18:58:13 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:04.708 Initializing NVMe Controllers 00:28:04.708 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:04.708 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:04.708 Initialization complete. Launching workers. 00:28:04.708 ======================================================== 00:28:04.708 Latency(us) 00:28:04.708 Device Information : IOPS MiB/s Average min max 00:28:04.708 PCIE (0000:88:00.0) NSID 1 from core 0: 84280.08 329.22 379.33 44.19 4381.87 00:28:04.708 ======================================================== 00:28:04.708 Total : 84280.08 329.22 379.33 44.19 4381.87 00:28:04.708 00:28:04.708 18:58:14 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:04.708 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.076 Initializing NVMe Controllers 00:28:06.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:06.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:06.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:06.076 Initialization complete. Launching workers. 
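Before the fabric runs, the target is populated over RPC: a 64 MiB Malloc bdev and the local NVMe drive at 0000:88:00.0 become namespaces 1 and 2 of a single subsystem listening on 10.0.0.2:4420. The same sequence compressed into a plain sketch (commands as they appear in the log, xtrace noise dropped):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc_py bdev_malloc_create 64 512                                  # Malloc0: 64 MiB, 512-byte blocks
  $rpc_py nvmf_create_transport -t tcp -o                            # TCP transport, default options
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # NSID 1
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1   # NSID 2, the local drive
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420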
00:28:06.076 ======================================================== 00:28:06.076 Latency(us) 00:28:06.076 Device Information : IOPS MiB/s Average min max 00:28:06.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 100.00 0.39 10396.36 320.59 45794.63 00:28:06.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 63.00 0.25 16213.24 7949.49 47899.44 00:28:06.076 ======================================================== 00:28:06.076 Total : 163.00 0.64 12644.60 320.59 47899.44 00:28:06.076 00:28:06.076 18:58:16 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:06.076 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.453 Initializing NVMe Controllers 00:28:07.453 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:07.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:07.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:07.453 Initialization complete. Launching workers. 00:28:07.453 ======================================================== 00:28:07.453 Latency(us) 00:28:07.453 Device Information : IOPS MiB/s Average min max 00:28:07.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7217.99 28.20 4433.82 803.69 8812.44 00:28:07.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3889.00 15.19 8311.21 4845.22 15806.83 00:28:07.453 ======================================================== 00:28:07.453 Total : 11106.99 43.39 5791.44 803.69 15806.83 00:28:07.453 00:28:07.453 18:58:17 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:07.453 18:58:17 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:07.453 18:58:17 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:07.453 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.977 Initializing NVMe Controllers 00:28:09.977 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:09.977 Controller IO queue size 128, less than required. 00:28:09.977 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:09.977 Controller IO queue size 128, less than required. 00:28:09.977 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:09.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:09.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:09.977 Initialization complete. Launching workers. 
00:28:09.977 ======================================================== 00:28:09.977 Latency(us) 00:28:09.977 Device Information : IOPS MiB/s Average min max 00:28:09.977 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 647.93 161.98 203056.90 123428.36 330859.53 00:28:09.977 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 590.43 147.61 228933.97 115295.92 341128.14 00:28:09.977 ======================================================== 00:28:09.977 Total : 1238.36 309.59 215394.74 115295.92 341128.14 00:28:09.977 00:28:09.977 18:58:20 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:09.977 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.977 No valid NVMe controllers or AIO or URING devices found 00:28:09.977 Initializing NVMe Controllers 00:28:09.977 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:09.977 Controller IO queue size 128, less than required. 00:28:09.977 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:09.977 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:09.977 Controller IO queue size 128, less than required. 00:28:09.977 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:09.977 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:09.977 WARNING: Some requested NVMe devices were skipped 00:28:09.977 18:58:20 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:09.977 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.257 Initializing NVMe Controllers 00:28:13.257 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:13.257 Controller IO queue size 128, less than required. 00:28:13.257 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:13.257 Controller IO queue size 128, less than required. 00:28:13.257 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:13.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:13.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:13.257 Initialization complete. Launching workers. 
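The -o 36964 run above skips both namespaces because 36964 bytes is not a multiple of the 512-byte sector size, leaving no controllers to test. Checking and rounding an I/O size with plain shell arithmetic (illustrative only):

  io=36964; bs=512
  echo $(( io % bs ))          # 100 -> misaligned, so the namespace gets dropped
  echo $(( io / bs * bs ))     # 36864 -> largest aligned size below 36964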
00:28:13.257 00:28:13.257 ==================== 00:28:13.257 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:13.257 TCP transport: 00:28:13.257 polls: 45436 00:28:13.257 idle_polls: 16419 00:28:13.257 sock_completions: 29017 00:28:13.257 nvme_completions: 2449 00:28:13.257 submitted_requests: 3642 00:28:13.257 queued_requests: 1 00:28:13.257 00:28:13.257 ==================== 00:28:13.257 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:13.257 TCP transport: 00:28:13.257 polls: 47844 00:28:13.257 idle_polls: 28190 00:28:13.257 sock_completions: 19654 00:28:13.257 nvme_completions: 1137 00:28:13.257 submitted_requests: 1694 00:28:13.257 queued_requests: 1 00:28:13.257 ======================================================== 00:28:13.257 Latency(us) 00:28:13.257 Device Information : IOPS MiB/s Average min max 00:28:13.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 611.32 152.83 222845.71 118769.65 339941.28 00:28:13.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 283.68 70.92 480185.83 142882.54 808344.84 00:28:13.257 ======================================================== 00:28:13.257 Total : 895.00 223.75 304413.34 118769.65 808344.84 00:28:13.257 00:28:13.257 18:58:23 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:13.257 18:58:23 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:13.257 18:58:23 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:13.257 18:58:23 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:28:13.257 18:58:23 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:16.533 18:58:26 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=b864943b-79d0-4bfd-a559-d10ee505afb0 00:28:16.533 18:58:26 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb b864943b-79d0-4bfd-a559-d10ee505afb0 00:28:16.533 18:58:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=b864943b-79d0-4bfd-a559-d10ee505afb0 00:28:16.533 18:58:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:16.533 18:58:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:16.533 18:58:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:16.533 18:58:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:16.790 18:58:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:16.790 { 00:28:16.790 "uuid": "b864943b-79d0-4bfd-a559-d10ee505afb0", 00:28:16.790 "name": "lvs_0", 00:28:16.790 "base_bdev": "Nvme0n1", 00:28:16.790 "total_data_clusters": 238234, 00:28:16.790 "free_clusters": 238234, 00:28:16.790 "block_size": 512, 00:28:16.790 "cluster_size": 4194304 00:28:16.790 } 00:28:16.790 ]' 00:28:16.790 18:58:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="b864943b-79d0-4bfd-a559-d10ee505afb0") .free_clusters' 00:28:16.790 18:58:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=238234 00:28:16.790 18:58:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="b864943b-79d0-4bfd-a559-d10ee505afb0") .cluster_size' 00:28:16.790 18:58:27 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:16.790 18:58:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=952936 00:28:16.790 18:58:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 952936 00:28:16.790 952936 00:28:16.790 18:58:27 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:16.790 18:58:27 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:16.790 18:58:27 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b864943b-79d0-4bfd-a559-d10ee505afb0 lbd_0 20480 00:28:17.720 18:58:27 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=38a8cdf6-7c68-4fa7-ab9f-d118ad043944 00:28:17.720 18:58:27 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 38a8cdf6-7c68-4fa7-ab9f-d118ad043944 lvs_n_0 00:28:18.283 18:58:28 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=9826e023-dd5d-4f63-8c6f-2cd75f72551f 00:28:18.283 18:58:28 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 9826e023-dd5d-4f63-8c6f-2cd75f72551f 00:28:18.283 18:58:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=9826e023-dd5d-4f63-8c6f-2cd75f72551f 00:28:18.283 18:58:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:18.283 18:58:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:18.283 18:58:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:18.283 18:58:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:18.540 18:58:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:18.540 { 00:28:18.540 "uuid": "b864943b-79d0-4bfd-a559-d10ee505afb0", 00:28:18.540 "name": "lvs_0", 00:28:18.540 "base_bdev": "Nvme0n1", 00:28:18.540 "total_data_clusters": 238234, 00:28:18.540 "free_clusters": 233114, 00:28:18.540 "block_size": 512, 00:28:18.540 "cluster_size": 4194304 00:28:18.540 }, 00:28:18.540 { 00:28:18.540 "uuid": "9826e023-dd5d-4f63-8c6f-2cd75f72551f", 00:28:18.540 "name": "lvs_n_0", 00:28:18.540 "base_bdev": "38a8cdf6-7c68-4fa7-ab9f-d118ad043944", 00:28:18.540 "total_data_clusters": 5114, 00:28:18.540 "free_clusters": 5114, 00:28:18.540 "block_size": 512, 00:28:18.540 "cluster_size": 4194304 00:28:18.540 } 00:28:18.540 ]' 00:28:18.540 18:58:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="9826e023-dd5d-4f63-8c6f-2cd75f72551f") .free_clusters' 00:28:18.540 18:58:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:28:18.540 18:58:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="9826e023-dd5d-4f63-8c6f-2cd75f72551f") .cluster_size' 00:28:18.540 18:58:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:18.540 18:58:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:28:18.540 18:58:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:28:18.540 20456 00:28:18.540 18:58:28 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:18.540 18:58:28 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9826e023-dd5d-4f63-8c6f-2cd75f72551f lbd_nest_0 20456 00:28:18.797 18:58:29 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=cd76fdb1-001e-444d-9f8d-74bef7336a1e 00:28:18.797 18:58:29 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:19.055 18:58:29 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:19.055 18:58:29 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 cd76fdb1-001e-444d-9f8d-74bef7336a1e 00:28:19.312 18:58:29 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:19.569 18:58:29 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:19.569 18:58:29 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:19.569 18:58:29 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:19.569 18:58:29 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:19.569 18:58:29 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:19.569 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.800 Initializing NVMe Controllers 00:28:31.800 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:31.800 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:31.800 Initialization complete. Launching workers. 00:28:31.800 ======================================================== 00:28:31.800 Latency(us) 00:28:31.800 Device Information : IOPS MiB/s Average min max 00:28:31.800 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.40 0.02 21164.97 298.20 46029.50 00:28:31.800 ======================================================== 00:28:31.800 Total : 47.40 0.02 21164.97 298.20 46029.50 00:28:31.800 00:28:31.800 18:58:40 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:31.800 18:58:40 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:31.800 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.763 Initializing NVMe Controllers 00:28:41.763 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:41.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:41.763 Initialization complete. Launching workers. 
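The 952936 and 20456 figures reported by get_lvs_free_mb above are simply free_clusters times the 4 MiB cluster_size. A hedged sketch of that lookup (same RPC as the harness, but selecting by lvstore name rather than UUID for brevity):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  fc=$($rpc_py bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_0") .free_clusters')
  cs=$($rpc_py bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_0") .cluster_size')
  echo $(( fc * cs / 1024 / 1024 ))   # 238234 * 4194304 B = 952936 MiB; for lvs_n_0: 5114 * 4 MiB = 20456 MiB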
00:28:41.763 ======================================================== 00:28:41.763 Latency(us) 00:28:41.763 Device Information : IOPS MiB/s Average min max 00:28:41.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 84.30 10.54 11870.73 5507.75 47901.42 00:28:41.763 ======================================================== 00:28:41.763 Total : 84.30 10.54 11870.73 5507.75 47901.42 00:28:41.763 00:28:41.763 18:58:50 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:41.763 18:58:50 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:41.763 18:58:50 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:41.763 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.720 Initializing NVMe Controllers 00:28:51.720 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:51.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:51.720 Initialization complete. Launching workers. 00:28:51.720 ======================================================== 00:28:51.720 Latency(us) 00:28:51.720 Device Information : IOPS MiB/s Average min max 00:28:51.720 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6462.00 3.16 4951.99 373.70 12105.76 00:28:51.720 ======================================================== 00:28:51.721 Total : 6462.00 3.16 4951.99 373.70 12105.76 00:28:51.721 00:28:51.721 18:59:01 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:51.721 18:59:01 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:51.721 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.683 Initializing NVMe Controllers 00:29:01.683 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:01.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:01.683 Initialization complete. Launching workers. 00:29:01.683 ======================================================== 00:29:01.683 Latency(us) 00:29:01.683 Device Information : IOPS MiB/s Average min max 00:29:01.683 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1217.90 152.24 26347.33 1966.02 71007.15 00:29:01.683 ======================================================== 00:29:01.683 Total : 1217.90 152.24 26347.33 1966.02 71007.15 00:29:01.683 00:29:01.683 18:59:11 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:01.683 18:59:11 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:01.683 18:59:11 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:01.683 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.658 Initializing NVMe Controllers 00:29:11.658 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:11.658 Controller IO queue size 128, less than required. 00:29:11.658 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:11.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:11.658 Initialization complete. Launching workers. 00:29:11.658 ======================================================== 00:29:11.659 Latency(us) 00:29:11.659 Device Information : IOPS MiB/s Average min max 00:29:11.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11778.49 5.75 10871.81 1789.27 24635.91 00:29:11.659 ======================================================== 00:29:11.659 Total : 11778.49 5.75 10871.81 1789.27 24635.91 00:29:11.659 00:29:11.917 18:59:21 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:11.917 18:59:21 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:11.917 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.116 Initializing NVMe Controllers 00:29:24.116 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:24.116 Controller IO queue size 128, less than required. 00:29:24.116 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:24.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:24.116 Initialization complete. Launching workers. 00:29:24.116 ======================================================== 00:29:24.116 Latency(us) 00:29:24.116 Device Information : IOPS MiB/s Average min max 00:29:24.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1185.01 148.13 108221.07 29821.78 220356.61 00:29:24.116 ======================================================== 00:29:24.116 Total : 1185.01 148.13 108221.07 29821.78 220356.61 00:29:24.116 00:29:24.116 18:59:32 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:24.116 18:59:32 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cd76fdb1-001e-444d-9f8d-74bef7336a1e 00:29:24.116 18:59:33 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:24.116 18:59:33 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 38a8cdf6-7c68-4fa7-ab9f-d118ad043944 00:29:24.116 18:59:33 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:24.116 18:59:34 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:24.116 18:59:34 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:24.116 18:59:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:24.116 18:59:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:24.116 18:59:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:24.116 18:59:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:24.116 18:59:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:24.116 18:59:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:24.116 rmmod nvme_tcp 00:29:24.116 rmmod nvme_fabrics 00:29:24.116 rmmod nvme_keyring 00:29:24.116 18:59:34 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:24.116 18:59:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:24.116 18:59:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:24.116 18:59:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1482989 ']' 00:29:24.116 18:59:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1482989 00:29:24.116 18:59:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 1482989 ']' 00:29:24.116 18:59:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 1482989 00:29:24.116 18:59:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:29:24.116 18:59:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:24.116 18:59:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1482989 00:29:24.116 18:59:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:24.116 18:59:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:24.116 18:59:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1482989' 00:29:24.116 killing process with pid 1482989 00:29:24.116 18:59:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 1482989 00:29:24.116 18:59:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 1482989 00:29:26.013 18:59:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:26.013 18:59:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:26.013 18:59:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:26.013 18:59:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:26.013 18:59:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:26.013 18:59:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.013 18:59:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:26.013 18:59:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.915 18:59:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:27.915 00:29:27.915 real 1m32.073s 00:29:27.915 user 5m40.249s 00:29:27.915 sys 0m14.569s 00:29:27.915 18:59:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:27.915 18:59:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:27.915 ************************************ 00:29:27.915 END TEST nvmf_perf 00:29:27.915 ************************************ 00:29:27.915 18:59:37 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:27.915 18:59:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:27.915 18:59:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:27.915 18:59:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:27.915 ************************************ 00:29:27.915 START TEST nvmf_fio_host 00:29:27.915 ************************************ 00:29:27.915 18:59:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:27.915 * Looking for test storage... 
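The nvmf_perf teardown traced above undoes the setup in reverse order: the subsystem is deleted first so nothing references the volumes, then the nested lvol and lvstore, then the base lvol and lvstore. Condensed from the rpc.py calls in this log (the UUIDs are the ones printed above):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1        # stop exporting the lvol
$RPC bdev_lvol_delete cd76fdb1-001e-444d-9f8d-74bef7336a1e   # nested lvol
$RPC bdev_lvol_delete_lvstore -l lvs_n_0                     # nested lvstore
$RPC bdev_lvol_delete 38a8cdf6-7c68-4fa7-ab9f-d118ad043944   # base lvol
$RPC bdev_lvol_delete_lvstore -l lvs_0                       # base lvstore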
00:29:27.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.915 18:59:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:27.916 18:59:38 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:29.809 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
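The scan above classifies ports by PCI vendor/device ID; 0x8086:0x159b (matched under the e810 list) is the Intel E810 part found at 0000:0a:00.0 and 0000:0a:00.1. Outside the test harness the same classification can be reproduced with pciutils; this is a hedged sketch, not part of the test scripts, and it assumes the usual "address class: vendor:device" layout of lspci -nD output:

# Print PCI addresses of Ethernet-class devices (class 02xx) carrying the E810 ID seen above
lspci -nD | awk '$2 ~ /^02/ && $3 == "8086:159b" { print $1 }'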
00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:29.809 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:29.809 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:29.810 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:29.810 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
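The "Found net devices under ..." lines come from the sysfs glob used by nvmf/common.sh ("/sys/bus/pci/devices/$pci/net/"*), which maps a PCI address to its kernel interface name. The same lookup can be done by hand:

# Map the PCI addresses above to their net devices (per this log: cvl_0_0 and cvl_0_1)
ls /sys/bus/pci/devices/0000:0a:00.0/net/
ls /sys/bus/pci/devices/0000:0a:00.1/net/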
00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:29.810 18:59:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:29.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:29.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:29:29.810 00:29:29.810 --- 10.0.0.2 ping statistics --- 00:29:29.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.810 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:29.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:29.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:29:29.810 00:29:29.810 --- 10.0.0.1 ping statistics --- 00:29:29.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.810 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1495082 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1495082 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 1495082 ']' 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:29.810 18:59:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.810 [2024-07-20 18:59:40.090217] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:29:29.810 [2024-07-20 18:59:40.090292] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:29.810 EAL: No free 2048 kB hugepages reported on node 1 00:29:30.074 [2024-07-20 18:59:40.156860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:30.074 [2024-07-20 18:59:40.241479] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
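Condensing the nvmf_tcp_init steps traced above: the target-side port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, the initiator port (cvl_0_1) stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened, connectivity is verified with ping, and nvmf_tgt is then launched inside the namespace. A sketch using only the commands already shown in this log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2          # initiator -> target sanity check
# Start the target application inside the namespace with the flags used above
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &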
00:29:30.074 [2024-07-20 18:59:40.241533] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.074 [2024-07-20 18:59:40.241556] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.074 [2024-07-20 18:59:40.241588] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.074 [2024-07-20 18:59:40.241598] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:30.074 [2024-07-20 18:59:40.241687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.074 [2024-07-20 18:59:40.241752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:30.074 [2024-07-20 18:59:40.241818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:30.074 [2024-07-20 18:59:40.241821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.074 18:59:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:30.074 18:59:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:29:30.074 18:59:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:30.331 [2024-07-20 18:59:40.586987] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.331 18:59:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:30.331 18:59:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:30.331 18:59:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.331 18:59:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:30.588 Malloc1 00:29:30.588 18:59:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:30.845 18:59:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:31.101 18:59:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:31.358 [2024-07-20 18:59:41.585392] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.358 18:59:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:31.615 18:59:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:31.615 18:59:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:31.615 18:59:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:29:31.615 18:59:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:31.615 18:59:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:31.616 18:59:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:31.616 18:59:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:31.616 18:59:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:31.616 18:59:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:31.616 18:59:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:31.616 18:59:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:31.616 18:59:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:31.616 18:59:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:31.616 18:59:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:31.616 18:59:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:31.616 18:59:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:31.616 18:59:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:31.616 18:59:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:31.616 18:59:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:31.616 18:59:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:31.616 18:59:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:31.616 18:59:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:31.616 18:59:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:31.873 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:31.873 fio-3.35 00:29:31.873 Starting 1 thread 00:29:31.873 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.395 00:29:34.395 test: (groupid=0, jobs=1): err= 0: pid=1495439: Sat Jul 20 18:59:44 2024 00:29:34.395 read: IOPS=8983, BW=35.1MiB/s (36.8MB/s)(70.4MiB/2005msec) 00:29:34.396 slat (usec): min=2, max=110, avg= 2.52, stdev= 1.46 00:29:34.396 clat (usec): min=4334, max=13509, avg=8437.28, stdev=1341.21 00:29:34.396 lat (usec): min=4337, max=13511, avg=8439.80, stdev=1341.20 00:29:34.396 clat percentiles (usec): 00:29:34.396 | 1.00th=[ 5604], 5.00th=[ 6521], 10.00th=[ 6915], 20.00th=[ 7308], 00:29:34.396 | 30.00th=[ 7635], 40.00th=[ 7898], 50.00th=[ 8225], 60.00th=[ 8586], 00:29:34.396 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[10290], 95.00th=[10814], 00:29:34.396 | 99.00th=[11863], 99.50th=[12518], 99.90th=[13173], 99.95th=[13173], 00:29:34.396 | 99.99th=[13566] 00:29:34.396 bw ( KiB/s): min=34200, 
max=36640, per=99.84%, avg=35874.00, stdev=1148.63, samples=4 00:29:34.396 iops : min= 8550, max= 9160, avg=8968.50, stdev=287.16, samples=4 00:29:34.396 write: IOPS=9007, BW=35.2MiB/s (36.9MB/s)(70.5MiB/2005msec); 0 zone resets 00:29:34.396 slat (nsec): min=2071, max=87337, avg=2538.23, stdev=1160.00 00:29:34.396 clat (usec): min=2653, max=9392, avg=5707.20, stdev=937.98 00:29:34.396 lat (usec): min=2655, max=9394, avg=5709.74, stdev=938.02 00:29:34.396 clat percentiles (usec): 00:29:34.396 | 1.00th=[ 3458], 5.00th=[ 4015], 10.00th=[ 4424], 20.00th=[ 4883], 00:29:34.396 | 30.00th=[ 5276], 40.00th=[ 5538], 50.00th=[ 5800], 60.00th=[ 6063], 00:29:34.396 | 70.00th=[ 6259], 80.00th=[ 6456], 90.00th=[ 6849], 95.00th=[ 7111], 00:29:34.396 | 99.00th=[ 7767], 99.50th=[ 8029], 99.90th=[ 8455], 99.95th=[ 8717], 00:29:34.396 | 99.99th=[ 9372] 00:29:34.396 bw ( KiB/s): min=35208, max=36592, per=99.96%, avg=36014.00, stdev=585.42, samples=4 00:29:34.396 iops : min= 8802, max= 9148, avg=9003.50, stdev=146.35, samples=4 00:29:34.396 lat (msec) : 4=2.38%, 10=91.02%, 20=6.59% 00:29:34.396 cpu : usr=63.77%, sys=30.14%, ctx=38, majf=0, minf=6 00:29:34.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:34.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:34.396 issued rwts: total=18011,18060,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.396 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:34.396 00:29:34.396 Run status group 0 (all jobs): 00:29:34.396 READ: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=70.4MiB (73.8MB), run=2005-2005msec 00:29:34.396 WRITE: bw=35.2MiB/s (36.9MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=70.5MiB (74.0MB), run=2005-2005msec 00:29:34.396 18:59:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:34.396 18:59:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:34.396 18:59:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:34.396 18:59:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:34.396 18:59:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:34.396 18:59:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:34.396 18:59:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:34.396 18:59:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:34.396 18:59:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:34.396 18:59:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:34.396 18:59:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:34.396 18:59:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 
-- # awk '{print $3}' 00:29:34.396 18:59:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:34.396 18:59:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:34.396 18:59:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:34.396 18:59:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:34.396 18:59:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:34.396 18:59:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:34.396 18:59:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:34.396 18:59:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:34.396 18:59:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:34.396 18:59:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:34.396 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:34.396 fio-3.35 00:29:34.396 Starting 1 thread 00:29:34.396 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.935 00:29:36.935 test: (groupid=0, jobs=1): err= 0: pid=1495774: Sat Jul 20 18:59:47 2024 00:29:36.935 read: IOPS=5011, BW=78.3MiB/s (82.1MB/s)(157MiB/2009msec) 00:29:36.935 slat (usec): min=3, max=105, avg= 3.98, stdev= 1.99 00:29:36.935 clat (usec): min=5007, max=40416, avg=15302.90, stdev=4448.86 00:29:36.935 lat (usec): min=5011, max=40421, avg=15306.88, stdev=4448.92 00:29:36.935 clat percentiles (usec): 00:29:36.935 | 1.00th=[ 7177], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[11338], 00:29:36.935 | 30.00th=[12649], 40.00th=[13829], 50.00th=[15008], 60.00th=[16057], 00:29:36.935 | 70.00th=[17171], 80.00th=[19006], 90.00th=[21365], 95.00th=[23462], 00:29:36.935 | 99.00th=[26608], 99.50th=[27657], 99.90th=[33162], 99.95th=[33817], 00:29:36.935 | 99.99th=[34866] 00:29:36.935 bw ( KiB/s): min=28096, max=62272, per=52.70%, avg=42256.00, stdev=14369.78, samples=4 00:29:36.935 iops : min= 1756, max= 3892, avg=2641.00, stdev=898.11, samples=4 00:29:36.935 write: IOPS=2986, BW=46.7MiB/s (48.9MB/s)(86.7MiB/1859msec); 0 zone resets 00:29:36.935 slat (usec): min=30, max=185, avg=35.60, stdev= 6.90 00:29:36.935 clat (usec): min=7106, max=34922, avg=17870.34, stdev=5195.87 00:29:36.935 lat (usec): min=7137, max=34956, avg=17905.94, stdev=5196.21 00:29:36.935 clat percentiles (usec): 00:29:36.935 | 1.00th=[ 9503], 5.00th=[10552], 10.00th=[11338], 20.00th=[12256], 00:29:36.935 | 30.00th=[13698], 40.00th=[15533], 50.00th=[18482], 60.00th=[20055], 00:29:36.935 | 70.00th=[21365], 80.00th=[22676], 90.00th=[24511], 95.00th=[26084], 00:29:36.935 | 99.00th=[28705], 99.50th=[30278], 99.90th=[34341], 99.95th=[34866], 00:29:36.935 | 99.99th=[34866] 00:29:36.935 bw ( KiB/s): min=30656, max=65280, per=92.40%, avg=44144.00, stdev=14818.97, samples=4 00:29:36.935 iops : min= 1916, max= 4080, avg=2759.00, stdev=926.19, samples=4 00:29:36.935 lat (msec) : 10=8.32%, 20=67.31%, 50=24.37% 00:29:36.935 cpu : usr=63.15%, sys=27.24%, ctx=69, majf=0, minf=2 00:29:36.935 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:29:36.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:36.935 issued rwts: total=10068,5551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.935 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:36.935 00:29:36.935 Run status group 0 (all jobs): 00:29:36.935 READ: bw=78.3MiB/s (82.1MB/s), 78.3MiB/s-78.3MiB/s (82.1MB/s-82.1MB/s), io=157MiB (165MB), run=2009-2009msec 00:29:36.935 WRITE: bw=46.7MiB/s (48.9MB/s), 46.7MiB/s-46.7MiB/s (48.9MB/s-48.9MB/s), io=86.7MiB (90.9MB), run=1859-1859msec 00:29:36.935 18:59:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:37.193 18:59:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:37.193 18:59:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:37.193 18:59:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:37.193 18:59:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:37.193 18:59:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:29:37.193 18:59:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:37.193 18:59:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:37.193 18:59:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:29:37.193 18:59:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:29:37.193 18:59:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:29:37.193 18:59:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:29:40.474 Nvme0n1 00:29:40.474 18:59:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:43.746 18:59:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=46cd5e51-8c33-447f-a425-5e47d7fdaf92 00:29:43.746 18:59:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 46cd5e51-8c33-447f-a425-5e47d7fdaf92 00:29:43.746 18:59:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=46cd5e51-8c33-447f-a425-5e47d7fdaf92 00:29:43.746 18:59:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:29:43.746 18:59:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:29:43.746 18:59:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:29:43.746 18:59:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:43.746 18:59:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:29:43.746 { 00:29:43.746 "uuid": "46cd5e51-8c33-447f-a425-5e47d7fdaf92", 00:29:43.746 "name": "lvs_0", 00:29:43.746 "base_bdev": "Nvme0n1", 00:29:43.746 "total_data_clusters": 930, 00:29:43.746 "free_clusters": 930, 00:29:43.746 "block_size": 512, 00:29:43.746 
"cluster_size": 1073741824 00:29:43.746 } 00:29:43.746 ]' 00:29:43.746 18:59:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="46cd5e51-8c33-447f-a425-5e47d7fdaf92") .free_clusters' 00:29:43.746 18:59:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=930 00:29:43.746 18:59:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="46cd5e51-8c33-447f-a425-5e47d7fdaf92") .cluster_size' 00:29:43.746 18:59:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:29:43.746 18:59:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=952320 00:29:43.746 18:59:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 952320 00:29:43.746 952320 00:29:43.746 18:59:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:29:44.004 431d2368-dafc-49b2-920a-73052a581830 00:29:44.004 18:59:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:44.262 18:59:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:44.520 18:59:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:44.778 18:59:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:44.778 18:59:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:44.778 18:59:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:44.778 18:59:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:44.778 18:59:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:44.778 18:59:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:44.778 18:59:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:44.778 18:59:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:44.778 18:59:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:44.778 18:59:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:44.778 18:59:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:44.778 18:59:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:44.778 18:59:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:44.778 18:59:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 
00:29:44.778 18:59:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:44.778 18:59:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:44.779 18:59:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:44.779 18:59:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:44.779 18:59:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:44.779 18:59:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:44.779 18:59:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:44.779 18:59:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:45.035 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:45.035 fio-3.35 00:29:45.035 Starting 1 thread 00:29:45.035 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.565 00:29:47.565 test: (groupid=0, jobs=1): err= 0: pid=1497089: Sat Jul 20 18:59:57 2024 00:29:47.565 read: IOPS=5919, BW=23.1MiB/s (24.2MB/s)(46.5MiB/2009msec) 00:29:47.565 slat (usec): min=2, max=127, avg= 2.65, stdev= 1.96 00:29:47.565 clat (usec): min=1735, max=171942, avg=12012.17, stdev=11735.62 00:29:47.565 lat (usec): min=1738, max=171978, avg=12014.81, stdev=11735.85 00:29:47.565 clat percentiles (msec): 00:29:47.565 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:29:47.565 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:29:47.565 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 14], 00:29:47.565 | 99.00th=[ 16], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:29:47.565 | 99.99th=[ 171] 00:29:47.565 bw ( KiB/s): min=16856, max=26040, per=99.88%, avg=23650.00, stdev=4530.40, samples=4 00:29:47.565 iops : min= 4214, max= 6510, avg=5912.50, stdev=1132.60, samples=4 00:29:47.565 write: IOPS=5915, BW=23.1MiB/s (24.2MB/s)(46.4MiB/2009msec); 0 zone resets 00:29:47.565 slat (nsec): min=2090, max=92188, avg=2696.00, stdev=1367.40 00:29:47.565 clat (usec): min=478, max=170355, avg=9517.60, stdev=11005.34 00:29:47.565 lat (usec): min=480, max=170360, avg=9520.30, stdev=11005.54 00:29:47.565 clat percentiles (msec): 00:29:47.565 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:29:47.565 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:29:47.565 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:29:47.565 | 99.00th=[ 12], 99.50th=[ 16], 99.90th=[ 169], 99.95th=[ 171], 00:29:47.565 | 99.99th=[ 171] 00:29:47.565 bw ( KiB/s): min=17880, max=25728, per=99.94%, avg=23648.00, stdev=3849.50, samples=4 00:29:47.565 iops : min= 4470, max= 6432, avg=5912.00, stdev=962.38, samples=4 00:29:47.565 lat (usec) : 500=0.01%, 1000=0.01% 00:29:47.565 lat (msec) : 2=0.03%, 4=0.09%, 10=52.71%, 20=46.62%, 250=0.54% 00:29:47.565 cpu : usr=51.74%, sys=39.79%, ctx=64, majf=0, minf=20 00:29:47.565 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:47.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:29:47.565 issued rwts: total=11892,11884,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.565 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:47.565 00:29:47.565 Run status group 0 (all jobs): 00:29:47.565 READ: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=46.5MiB (48.7MB), run=2009-2009msec 00:29:47.565 WRITE: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=46.4MiB (48.7MB), run=2009-2009msec 00:29:47.565 18:59:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:47.565 18:59:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:48.498 18:59:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=2772e6fc-8610-4a0b-9aa4-7c645609fb23 00:29:48.498 18:59:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 2772e6fc-8610-4a0b-9aa4-7c645609fb23 00:29:48.498 18:59:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=2772e6fc-8610-4a0b-9aa4-7c645609fb23 00:29:48.498 18:59:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:29:48.498 18:59:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:29:48.498 18:59:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:29:48.498 18:59:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:48.755 18:59:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:29:48.755 { 00:29:48.755 "uuid": "46cd5e51-8c33-447f-a425-5e47d7fdaf92", 00:29:48.755 "name": "lvs_0", 00:29:48.755 "base_bdev": "Nvme0n1", 00:29:48.755 "total_data_clusters": 930, 00:29:48.755 "free_clusters": 0, 00:29:48.755 "block_size": 512, 00:29:48.755 "cluster_size": 1073741824 00:29:48.755 }, 00:29:48.755 { 00:29:48.755 "uuid": "2772e6fc-8610-4a0b-9aa4-7c645609fb23", 00:29:48.755 "name": "lvs_n_0", 00:29:48.755 "base_bdev": "431d2368-dafc-49b2-920a-73052a581830", 00:29:48.755 "total_data_clusters": 237847, 00:29:48.755 "free_clusters": 237847, 00:29:48.755 "block_size": 512, 00:29:48.755 "cluster_size": 4194304 00:29:48.755 } 00:29:48.755 ]' 00:29:48.756 18:59:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="2772e6fc-8610-4a0b-9aa4-7c645609fb23") .free_clusters' 00:29:49.013 18:59:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=237847 00:29:49.013 18:59:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="2772e6fc-8610-4a0b-9aa4-7c645609fb23") .cluster_size' 00:29:49.013 18:59:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:29:49.013 18:59:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=951388 00:29:49.013 18:59:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 951388 00:29:49.013 951388 00:29:49.013 18:59:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:29:49.577 4f734709-9688-4264-9a38-76c9a55b1096 00:29:49.577 18:59:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:49.834 19:00:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:50.090 19:00:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:50.347 19:00:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:50.347 19:00:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:50.347 19:00:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:50.347 19:00:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:50.347 19:00:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:50.347 19:00:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:50.347 19:00:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:50.347 19:00:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:50.347 19:00:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:50.347 19:00:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:50.347 19:00:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:50.347 19:00:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:50.347 19:00:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:50.347 19:00:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:50.347 19:00:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:50.347 19:00:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:50.347 19:00:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:50.347 19:00:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:50.347 19:00:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:50.347 19:00:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:50.347 19:00:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:50.347 19:00:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:50.603 test: (g=0): 
rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:50.603 fio-3.35 00:29:50.603 Starting 1 thread 00:29:50.603 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.123 00:29:53.123 test: (groupid=0, jobs=1): err= 0: pid=1497963: Sat Jul 20 19:00:03 2024 00:29:53.123 read: IOPS=5755, BW=22.5MiB/s (23.6MB/s)(45.2MiB/2009msec) 00:29:53.123 slat (nsec): min=1959, max=168464, avg=2625.46, stdev=2344.47 00:29:53.123 clat (usec): min=6280, max=20810, avg=12297.78, stdev=1117.92 00:29:53.123 lat (usec): min=6297, max=20813, avg=12300.41, stdev=1117.83 00:29:53.123 clat percentiles (usec): 00:29:53.123 | 1.00th=[ 9765], 5.00th=[10552], 10.00th=[10945], 20.00th=[11338], 00:29:53.123 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:29:53.123 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13698], 95.00th=[14091], 00:29:53.123 | 99.00th=[14877], 99.50th=[15270], 99.90th=[17957], 99.95th=[19268], 00:29:53.123 | 99.99th=[20579] 00:29:53.123 bw ( KiB/s): min=21096, max=23744, per=99.83%, avg=22982.00, stdev=1261.29, samples=4 00:29:53.123 iops : min= 5274, max= 5936, avg=5745.50, stdev=315.32, samples=4 00:29:53.123 write: IOPS=5742, BW=22.4MiB/s (23.5MB/s)(45.1MiB/2009msec); 0 zone resets 00:29:53.123 slat (usec): min=2, max=151, avg= 2.75, stdev= 1.83 00:29:53.123 clat (usec): min=3700, max=19177, avg=9758.57, stdev=1012.65 00:29:53.123 lat (usec): min=3708, max=19180, avg=9761.32, stdev=1012.66 00:29:53.123 clat percentiles (usec): 00:29:53.123 | 1.00th=[ 7504], 5.00th=[ 8225], 10.00th=[ 8586], 20.00th=[ 8979], 00:29:53.123 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10028], 00:29:53.123 | 70.00th=[10159], 80.00th=[10552], 90.00th=[10945], 95.00th=[11338], 00:29:53.123 | 99.00th=[11863], 99.50th=[12387], 99.90th=[17695], 99.95th=[18220], 00:29:53.123 | 99.99th=[19268] 00:29:53.123 bw ( KiB/s): min=22104, max=23464, per=99.96%, avg=22960.00, stdev=591.28, samples=4 00:29:53.123 iops : min= 5526, max= 5866, avg=5740.00, stdev=147.82, samples=4 00:29:53.123 lat (msec) : 4=0.01%, 10=31.21%, 20=68.76%, 50=0.01% 00:29:53.123 cpu : usr=48.90%, sys=41.68%, ctx=90, majf=0, minf=20 00:29:53.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:53.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:53.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:53.123 issued rwts: total=11562,11536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:53.123 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:53.123 00:29:53.123 Run status group 0 (all jobs): 00:29:53.123 READ: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.2MiB (47.4MB), run=2009-2009msec 00:29:53.123 WRITE: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=45.1MiB (47.3MB), run=2009-2009msec 00:29:53.123 19:00:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:53.123 19:00:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:29:53.123 19:00:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:57.298 19:00:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:57.298 19:00:07 nvmf_tcp.nvmf_fio_host -- 
host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:00.572 19:00:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:00.572 19:00:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:02.467 rmmod nvme_tcp 00:30:02.467 rmmod nvme_fabrics 00:30:02.467 rmmod nvme_keyring 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1495082 ']' 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1495082 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 1495082 ']' 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 1495082 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1495082 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1495082' 00:30:02.467 killing process with pid 1495082 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 1495082 00:30:02.467 19:00:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 1495082 00:30:02.724 19:00:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:02.724 19:00:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:02.724 19:00:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:02.724 19:00:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:02.724 19:00:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:02.724 19:00:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.724 19:00:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
14> /dev/null' 00:30:02.724 19:00:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.621 19:00:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:04.621 00:30:04.621 real 0m36.950s 00:30:04.621 user 2m19.522s 00:30:04.621 sys 0m7.601s 00:30:04.621 19:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:04.621 19:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.621 ************************************ 00:30:04.621 END TEST nvmf_fio_host 00:30:04.621 ************************************ 00:30:04.878 19:00:14 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:04.878 19:00:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:04.878 19:00:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:04.878 19:00:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:04.878 ************************************ 00:30:04.878 START TEST nvmf_failover 00:30:04.878 ************************************ 00:30:04.878 19:00:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:04.878 * Looking for test storage... 00:30:04.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:04.878 19:00:15 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:04.878 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:04.878 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:04.878 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:04.879 19:00:15 
nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:04.879 19:00:15 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:04.879 19:00:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:06.779 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:06.779 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:06.779 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.780 
19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:06.780 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:06.780 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:06.780 19:00:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:06.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:06.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:30:06.780 00:30:06.780 --- 10.0.0.2 ping statistics --- 00:30:06.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.780 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:06.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:06.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:30:06.780 00:30:06.780 --- 10.0.0.1 ping statistics --- 00:30:06.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.780 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1501760 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1501760 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1501760 ']' 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:06.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:06.780 19:00:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:07.038 [2024-07-20 19:00:17.139647] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:30:07.038 [2024-07-20 19:00:17.139730] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:07.038 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.038 [2024-07-20 19:00:17.210709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:07.038 [2024-07-20 19:00:17.302235] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:07.038 [2024-07-20 19:00:17.302298] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:07.038 [2024-07-20 19:00:17.302324] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:07.038 [2024-07-20 19:00:17.302337] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:07.038 [2024-07-20 19:00:17.302349] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:07.038 [2024-07-20 19:00:17.302435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:07.038 [2024-07-20 19:00:17.302548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:07.038 [2024-07-20 19:00:17.302550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:07.295 19:00:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:07.295 19:00:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:07.295 19:00:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:07.295 19:00:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:07.295 19:00:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:07.295 19:00:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:07.295 19:00:17 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:07.552 [2024-07-20 19:00:17.707711] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:07.552 19:00:17 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:07.810 Malloc0 00:30:07.810 19:00:18 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:08.066 19:00:18 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:08.323 19:00:18 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
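[Editor's note] The network plumbing performed by nvmf_tcp_init in the trace above can be condensed into the short sketch below. Every command is copied from the log itself (cvl_0_0 and cvl_0_1 are the two E810 ports found during PCI discovery); only the comments are added, and the sketch is for orientation rather than a replacement for nvmf/common.sh.

# Target port is moved into its own network namespace so 10.0.0.1 <-> 10.0.0.2
# traffic leaves the root namespace through cvl_0_1 and reaches the target via cvl_0_0.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port, inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # sanity check before nvmf_tgt starts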
00:30:08.579 [2024-07-20 19:00:18.824678] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.579 19:00:18 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:08.835 [2024-07-20 19:00:19.097393] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:08.836 19:00:19 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:09.093 [2024-07-20 19:00:19.342215] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:09.093 19:00:19 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1502054 00:30:09.093 19:00:19 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:09.093 19:00:19 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:09.093 19:00:19 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1502054 /var/tmp/bdevperf.sock 00:30:09.093 19:00:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1502054 ']' 00:30:09.093 19:00:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:09.093 19:00:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:09.093 19:00:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:09.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
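[Editor's note] The failover scenario traced above and below boils down to a short RPC sequence. The sketch condenses it with the long workspace paths shortened to rpc.py, bdevperf and bdevperf.py; ports, NQN and all other arguments are copied from the trace.

# Target side: one malloc-backed subsystem with listeners on three TCP ports.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done

# Initiator side: bdevperf runs verify I/O while paths are attached on 4420/4421.
bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &    # 15 s of I/O in the background

# Listeners are then removed and re-added to force failover while I/O runs.
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 3
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 3
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 1
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
wait                                                      # let the perform_tests run finish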
00:30:09.093 19:00:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:09.093 19:00:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:09.351 19:00:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:09.351 19:00:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:09.351 19:00:19 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:09.915 NVMe0n1 00:30:09.915 19:00:19 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:10.172 00:30:10.172 19:00:20 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1502189 00:30:10.172 19:00:20 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:10.172 19:00:20 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:11.542 19:00:21 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:11.542 [2024-07-20 19:00:21.719015] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1722d50 is same with the state(5) to be set 00:30:11.542 [2024-07-20 19:00:21.719110] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1722d50 is same with the state(5) to be set 00:30:11.542 [2024-07-20 19:00:21.719127] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1722d50 is same with the state(5) to be set 00:30:11.542 [2024-07-20 19:00:21.719140] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1722d50 is same with the state(5) to be set 00:30:11.543 [2024-07-20 19:00:21.719176] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1722d50 is same with the state(5) to be set 00:30:11.543 [2024-07-20 19:00:21.719188] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1722d50 is same with the state(5) to be set 00:30:11.543 [2024-07-20 19:00:21.719200] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1722d50 is same with the state(5) to be set 00:30:11.543 [2024-07-20 19:00:21.719212] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1722d50 is same with the state(5) to be set 00:30:11.543 [2024-07-20 19:00:21.719223] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1722d50 is same with the state(5) to be set 00:30:11.543 [2024-07-20 19:00:21.719235] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1722d50 is same with the state(5) to be set 00:30:11.543 [2024-07-20 19:00:21.719257] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1722d50 is same with the state(5) to be set 00:30:11.543 [2024-07-20 19:00:21.719269] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1722d50 is same with the state(5) to be set 00:30:11.543 [2024-07-20 19:00:21.719280] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1722d50 is same with the
state(5) to be set 00:30:11.543 [2024-07-20 19:00:21.719536] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1722d50 is same with the state(5) to be set 00:30:11.543 [2024-07-20 19:00:21.719547] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1722d50 is same with the state(5) to be set 00:30:11.543 [2024-07-20 19:00:21.719559] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1722d50 is same with the state(5) to be set 00:30:11.543 [2024-07-20 19:00:21.719573] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1722d50 is same with the state(5) to be set 00:30:11.543 [2024-07-20 19:00:21.719585] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1722d50 is same with the state(5) to be set 00:30:11.543 19:00:21 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:14.821 19:00:24 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:14.821 00:30:14.821 19:00:25 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:15.078 [2024-07-20 19:00:25.377817] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1723bd0 is same with the state(5) to be set 00:30:15.078 [2024-07-20 19:00:25.377881] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1723bd0 is same with the state(5) to be set 00:30:15.078 [2024-07-20 19:00:25.377896] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1723bd0 is same with the state(5) to be set 00:30:15.078 [2024-07-20 19:00:25.377909] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1723bd0 is same with the state(5) to be set 00:30:15.078 [2024-07-20 19:00:25.377922] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1723bd0 is same with the state(5) to be set 00:30:15.078 [2024-07-20 19:00:25.377934] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1723bd0 is same with the state(5) to be set 00:30:15.078 [2024-07-20 19:00:25.377947] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1723bd0 is same with the state(5) to be set 00:30:15.078 [2024-07-20 19:00:25.377959] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1723bd0 is same with the state(5) to be set 00:30:15.078 [2024-07-20 19:00:25.377971] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1723bd0 is same with the state(5) to be set 00:30:15.078 [2024-07-20 19:00:25.377984] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1723bd0 is same with the state(5) to be set 00:30:15.078 [2024-07-20 19:00:25.377997] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1723bd0 is same with the state(5) to be set 00:30:15.078 [2024-07-20 19:00:25.378010] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1723bd0 is same with the state(5) to be set 00:30:15.078 [2024-07-20 19:00:25.378023] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1723bd0 is same with the state(5) to be set 
00:30:15.079 [2024-07-20 19:00:25.378316] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x1723bd0 is same with the state(5) to be set 00:30:15.079 [2024-07-20 19:00:25.378328] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1723bd0 is same with the state(5) to be set 00:30:15.079 [2024-07-20 19:00:25.378340] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1723bd0 is same with the state(5) to be set 00:30:15.079 [2024-07-20 19:00:25.378352] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1723bd0 is same with the state(5) to be set 00:30:15.079 [2024-07-20 19:00:25.378364] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1723bd0 is same with the state(5) to be set 00:30:15.079 19:00:25 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:18.360 19:00:28 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:18.360 [2024-07-20 19:00:28.624123] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:18.360 19:00:28 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:19.733 19:00:29 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:19.733 [2024-07-20 19:00:29.873691] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.873758] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.873782] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.873802] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.873816] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.873840] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.873853] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.873865] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.873877] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.873890] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.873902] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.873914] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.873928] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same 
with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.874205]
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.874217] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.874229] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.874241] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.874254] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.874266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.874279] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.874291] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.874303] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.874315] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.874327] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.874339] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.874351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 [2024-07-20 19:00:29.874363] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724750 is same with the state(5) to be set 00:30:19.733 19:00:29 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1502189 00:30:26.330 0 00:30:26.330 19:00:35 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1502054 00:30:26.330 19:00:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1502054 ']' 00:30:26.330 19:00:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1502054 00:30:26.330 19:00:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:30:26.330 19:00:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:26.330 19:00:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1502054 00:30:26.330 19:00:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:26.330 19:00:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:26.330 19:00:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1502054' 00:30:26.330 killing process with pid 1502054 00:30:26.330 19:00:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1502054 00:30:26.330 19:00:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1502054 00:30:26.330 
19:00:35 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:26.330 [2024-07-20 19:00:19.403732] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:30:26.330 [2024-07-20 19:00:19.403841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502054 ] 00:30:26.330 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.330 [2024-07-20 19:00:19.464951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.330 [2024-07-20 19:00:19.549487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.330 Running I/O for 15 seconds... 00:30:26.330 [2024-07-20 19:00:21.720681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.330 [2024-07-20 19:00:21.720724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.720750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.330 [2024-07-20 19:00:21.720768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.720821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.330 [2024-07-20 19:00:21.720837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.720853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.330 [2024-07-20 19:00:21.720868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.720885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.330 [2024-07-20 19:00:21.720899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.720914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.330 [2024-07-20 19:00:21.720929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.720944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.330 [2024-07-20 19:00:21.720957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.720973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.330 [2024-07-20 19:00:21.720987] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.721002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.330 [2024-07-20 19:00:21.721016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.721031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.330 [2024-07-20 19:00:21.721045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.721060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.330 [2024-07-20 19:00:21.721089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.721123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.330 [2024-07-20 19:00:21.721138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.721153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.330 [2024-07-20 19:00:21.721166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.721181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.330 [2024-07-20 19:00:21.721194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.721210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.330 [2024-07-20 19:00:21.721223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.721238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.330 [2024-07-20 19:00:21.721251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.721265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.330 [2024-07-20 19:00:21.721279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.721293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.330 [2024-07-20 19:00:21.721307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.721321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.330 [2024-07-20 19:00:21.721334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.721349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.330 [2024-07-20 19:00:21.721362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.721376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.330 [2024-07-20 19:00:21.721390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.721404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.330 [2024-07-20 19:00:21.721418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.721433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.330 [2024-07-20 19:00:21.721446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.721461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.330 [2024-07-20 19:00:21.721479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-07-20 19:00:21.721494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.330 [2024-07-20 19:00:21.721508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.721523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.721537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.721551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.721565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.721580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.721594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.721609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.721622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.721637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.721650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.721665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.721678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.721692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.721705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.721720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.721733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.721748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.721761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.721775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.721821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.721838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.721852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.721870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.721885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.721900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.721914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.721929] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.721943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.721958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.721971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.721986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.721999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722225] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-07-20 19:00:21.722239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-07-20 19:00:21.722272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84880 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-07-20 19:00:21.722747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.331 [2024-07-20 19:00:21.722761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.722806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.332 [2024-07-20 19:00:21.722822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.722838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:26.332 [2024-07-20 19:00:21.722852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.722869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.332 [2024-07-20 19:00:21.722883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.722899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.332 [2024-07-20 19:00:21.722912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.722928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.332 [2024-07-20 19:00:21.722942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.722958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.332 [2024-07-20 19:00:21.722972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.722987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.332 [2024-07-20 19:00:21.723001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.723020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.332 [2024-07-20 19:00:21.723034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.723050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.332 [2024-07-20 19:00:21.723064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.723079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.332 [2024-07-20 19:00:21.723118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.723134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:85032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.332 [2024-07-20 19:00:21.723147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.723162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.332 [2024-07-20 19:00:21.723175] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.723190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.332 [2024-07-20 19:00:21.723204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.723219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.332 [2024-07-20 19:00:21.723233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.723248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:85064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.332 [2024-07-20 19:00:21.723261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.723291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.332 [2024-07-20 19:00:21.723308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85072 len:8 PRP1 0x0 PRP2 0x0 00:30:26.332 [2024-07-20 19:00:21.723322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.723339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.332 [2024-07-20 19:00:21.723351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.332 [2024-07-20 19:00:21.723362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85080 len:8 PRP1 0x0 PRP2 0x0 00:30:26.332 [2024-07-20 19:00:21.723375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.723388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.332 [2024-07-20 19:00:21.723399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.332 [2024-07-20 19:00:21.723410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85088 len:8 PRP1 0x0 PRP2 0x0 00:30:26.332 [2024-07-20 19:00:21.723423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.723440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.332 [2024-07-20 19:00:21.723452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.332 [2024-07-20 19:00:21.723463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85096 len:8 PRP1 0x0 PRP2 0x0 00:30:26.332 [2024-07-20 19:00:21.723476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.723490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.332 [2024-07-20 19:00:21.723501] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.332 [2024-07-20 19:00:21.723512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85104 len:8 PRP1 0x0 PRP2 0x0 00:30:26.332 [2024-07-20 19:00:21.723525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.723538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.332 [2024-07-20 19:00:21.723549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.332 [2024-07-20 19:00:21.723560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85112 len:8 PRP1 0x0 PRP2 0x0 00:30:26.332 [2024-07-20 19:00:21.723573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.723586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.332 [2024-07-20 19:00:21.723597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.332 [2024-07-20 19:00:21.723608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85120 len:8 PRP1 0x0 PRP2 0x0 00:30:26.332 [2024-07-20 19:00:21.723621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.723634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.332 [2024-07-20 19:00:21.723645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.332 [2024-07-20 19:00:21.723656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85128 len:8 PRP1 0x0 PRP2 0x0 00:30:26.332 [2024-07-20 19:00:21.723668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.723681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.332 [2024-07-20 19:00:21.723692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.332 [2024-07-20 19:00:21.723703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85136 len:8 PRP1 0x0 PRP2 0x0 00:30:26.332 [2024-07-20 19:00:21.723716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.723729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.332 [2024-07-20 19:00:21.723740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.332 [2024-07-20 19:00:21.723751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85144 len:8 PRP1 0x0 PRP2 0x0 00:30:26.332 [2024-07-20 19:00:21.723764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.723787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.332 [2024-07-20 19:00:21.723824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:30:26.332 [2024-07-20 19:00:21.723837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85152 len:8 PRP1 0x0 PRP2 0x0 00:30:26.332 [2024-07-20 19:00:21.723855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.723870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.332 [2024-07-20 19:00:21.723881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.332 [2024-07-20 19:00:21.723893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85160 len:8 PRP1 0x0 PRP2 0x0 00:30:26.332 [2024-07-20 19:00:21.723906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.723920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.332 [2024-07-20 19:00:21.723932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.332 [2024-07-20 19:00:21.723943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85168 len:8 PRP1 0x0 PRP2 0x0 00:30:26.332 [2024-07-20 19:00:21.723956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.723971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.332 [2024-07-20 19:00:21.723982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.332 [2024-07-20 19:00:21.723994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85176 len:8 PRP1 0x0 PRP2 0x0 00:30:26.332 [2024-07-20 19:00:21.724008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.724021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.332 [2024-07-20 19:00:21.724032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.332 [2024-07-20 19:00:21.724045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85184 len:8 PRP1 0x0 PRP2 0x0 00:30:26.332 [2024-07-20 19:00:21.724058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.724072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.332 [2024-07-20 19:00:21.724083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.332 [2024-07-20 19:00:21.724101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85192 len:8 PRP1 0x0 PRP2 0x0 00:30:26.332 [2024-07-20 19:00:21.724129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-07-20 19:00:21.724143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.724153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 
[2024-07-20 19:00:21.724164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85200 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 [2024-07-20 19:00:21.724177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.724190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.724201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.724212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85208 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 [2024-07-20 19:00:21.724224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.724238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.724248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.724262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85216 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 [2024-07-20 19:00:21.724276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.724289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.724300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.724311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85224 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 [2024-07-20 19:00:21.724324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.724337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.724347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.724358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85232 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 [2024-07-20 19:00:21.724371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.724384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.724395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.724407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85240 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 [2024-07-20 19:00:21.724420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.724433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.724444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.724455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85248 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 [2024-07-20 19:00:21.724468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.724481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.724492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.724503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84440 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 [2024-07-20 19:00:21.724516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.724529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.724540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.724552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84448 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 [2024-07-20 19:00:21.724564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.724578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.724589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.724600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84456 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 [2024-07-20 19:00:21.724613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.724626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.724641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.724652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84464 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 [2024-07-20 19:00:21.724666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.724679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.724689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.724701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84472 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 [2024-07-20 19:00:21.724713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.724727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.724737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.724748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:84480 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 [2024-07-20 19:00:21.724762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.724786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.724819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.724833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85256 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 [2024-07-20 19:00:21.724846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.724860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.724872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.724884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85264 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 [2024-07-20 19:00:21.724897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.724911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.724922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.724934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85272 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 [2024-07-20 19:00:21.724947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.724961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.724972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.724984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85280 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 [2024-07-20 19:00:21.724997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.725011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.725023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.725034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85288 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 [2024-07-20 19:00:21.725048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.725065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.725077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.725090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85296 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 
[2024-07-20 19:00:21.725118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.725132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.725143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.725154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85304 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 [2024-07-20 19:00:21.725167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.725180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.725191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.725202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85312 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 [2024-07-20 19:00:21.725215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.725229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.725239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.725251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85320 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 [2024-07-20 19:00:21.725264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.725277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.725288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.725299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85328 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 [2024-07-20 19:00:21.725312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.725325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.725342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.725354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85336 len:8 PRP1 0x0 PRP2 0x0 00:30:26.333 [2024-07-20 19:00:21.725367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-07-20 19:00:21.725381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.333 [2024-07-20 19:00:21.725392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.333 [2024-07-20 19:00:21.725403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85344 len:8 PRP1 0x0 PRP2 0x0 00:30:26.334 [2024-07-20 19:00:21.725416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:21.725429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.334 [2024-07-20 19:00:21.725440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.334 [2024-07-20 19:00:21.725451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85352 len:8 PRP1 0x0 PRP2 0x0 00:30:26.334 [2024-07-20 19:00:21.725468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:21.725481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.334 [2024-07-20 19:00:21.725492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.334 [2024-07-20 19:00:21.725504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85360 len:8 PRP1 0x0 PRP2 0x0 00:30:26.334 [2024-07-20 19:00:21.725517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:21.725530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.334 [2024-07-20 19:00:21.725541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.334 [2024-07-20 19:00:21.725552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85368 len:8 PRP1 0x0 PRP2 0x0 00:30:26.334 [2024-07-20 19:00:21.725565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:21.725579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.334 [2024-07-20 19:00:21.725590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.334 [2024-07-20 19:00:21.725601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85376 len:8 PRP1 0x0 PRP2 0x0 00:30:26.334 [2024-07-20 19:00:21.725614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:21.725627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.334 [2024-07-20 19:00:21.725638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.334 [2024-07-20 19:00:21.725649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84488 len:8 PRP1 0x0 PRP2 0x0 00:30:26.334 [2024-07-20 19:00:21.725662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:21.725718] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1946b50 was disconnected and freed. reset controller. 
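[Editor's note] Each print_command/print_completion pair above is one queued I/O on the failing path being completed back to bdevperf with "ABORTED - SQ DELETION" before the qpair is freed; the "(00/08)" is the status code type / status code pair (generic status type 0x0, status code 0x08 = command aborted due to SQ deletion). The standalone decoder below shows how that pair is carved out of completion dword 3 per the NVMe spec; it uses no SPDK structures and the variable names are illustrative only.

```c
/*
 * Decode the "(00/08)" seen in the completions above from an NVMe CQE.
 * CQE dword 3 layout (NVMe base spec): bit 16 = phase tag, bits 24:17 = SC,
 * bits 27:25 = SCT, bits 29:28 = CRD, bit 30 = More, bit 31 = DNR.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t cdw3 = 0x08u << 17;           /* SCT=0x0, SC=0x08, M/DNR/CRD=0 */

    unsigned sc  = (cdw3 >> 17) & 0xffu;   /* status code       */
    unsigned sct = (cdw3 >> 25) & 0x7u;    /* status code type  */
    unsigned dnr = (cdw3 >> 31) & 0x1u;    /* do-not-retry bit  */

    if (sct == 0x0 && sc == 0x08) {
        printf("ABORTED - SQ DELETION (%02x/%02x) dnr:%u\n", sct, sc, dnr);
    }
    return 0;
}
```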
00:30:26.334 [2024-07-20 19:00:21.725735] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:26.334 [2024-07-20 19:00:21.725768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.334 [2024-07-20 19:00:21.725821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:21.725838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.334 [2024-07-20 19:00:21.725858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:21.725873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.334 [2024-07-20 19:00:21.725886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:21.725900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.334 [2024-07-20 19:00:21.725914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:21.725927] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.334 [2024-07-20 19:00:21.725990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1927eb0 (9): Bad file descriptor 00:30:26.334 [2024-07-20 19:00:21.729275] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.334 [2024-07-20 19:00:21.759282] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
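[Editor's note] This is the behaviour the nvmf_failover test is exercising: once the qpair to 10.0.0.2:4420 drops, the bdev_nvme layer fails the controller, aborts the outstanding admin async-event requests, and reconnects on the alternate transport address 10.0.0.2:4421 that the test registered, ending with "Resetting controller successful." The sketch below only illustrates that path-rotation idea with hypothetical names and a faked connect function; it is not SPDK's bdev_nvme failover code.

```c
/*
 * Rough sketch of failover across an ordered list of transport addresses:
 * when the active path fails, pick the next one and retry the connect.
 * try_connect() is a stand-in that pretends the 4420 listener is gone.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

struct path {
    const char *addr;
    const char *svcid;
};

static bool try_connect(const struct path *p)
{
    /* Stand-in: pretend the listener on port 4420 has gone away. */
    return strcmp(p->svcid, "4420") != 0;
}

int main(void)
{
    const struct path paths[] = {
        { "10.0.0.2", "4420" },   /* primary, fails during the test */
        { "10.0.0.2", "4421" }    /* secondary, failover target     */
    };
    size_t active = 0;

    if (!try_connect(&paths[active])) {
        size_t next = (active + 1) % (sizeof(paths) / sizeof(paths[0]));
        printf("Start failover from %s:%s to %s:%s\n",
               paths[active].addr, paths[active].svcid,
               paths[next].addr, paths[next].svcid);
        active = next;
        if (try_connect(&paths[active])) {
            printf("Resetting controller successful.\n");
        }
    }
    return 0;
}
```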
00:30:26.334 [2024-07-20 19:00:25.378518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.334 [2024-07-20 19:00:25.378562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:25.378587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.334 [2024-07-20 19:00:25.378603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:25.378617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.334 [2024-07-20 19:00:25.378631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:25.378645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.334 [2024-07-20 19:00:25.378659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:25.378673] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1927eb0 is same with the state(5) to be set 00:30:26.334 [2024-07-20 19:00:25.378749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-07-20 19:00:25.378771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:25.378805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-07-20 19:00:25.378825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:25.378843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-07-20 19:00:25.378858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:25.378874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-07-20 19:00:25.378889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:25.378904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-07-20 19:00:25.378918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:25.378934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-07-20 19:00:25.378948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:25.378963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-07-20 19:00:25.378978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:25.378999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-07-20 19:00:25.379037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:25.379053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-07-20 19:00:25.379066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:25.379081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-07-20 19:00:25.379109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:25.379124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-07-20 19:00:25.379138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:25.379152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-07-20 19:00:25.379166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:25.379182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-07-20 19:00:25.379196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-07-20 19:00:25.379211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-07-20 19:00:25.379224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 
[2024-07-20 19:00:25.379896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.379971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.379986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.380000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.380015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.335 [2024-07-20 19:00:25.380029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.380044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.380065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.380080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.380094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.380123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.380137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.380152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.380166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.380184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.380198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.380212] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.380226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.380240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.380254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.380268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-07-20 19:00:25.380280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.380295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.335 [2024-07-20 19:00:25.380323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.380340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.335 [2024-07-20 19:00:25.380355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.380370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.335 [2024-07-20 19:00:25.380385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.380401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.335 [2024-07-20 19:00:25.380415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.380429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.335 [2024-07-20 19:00:25.380444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.380462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.335 [2024-07-20 19:00:25.380476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.380491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.335 [2024-07-20 19:00:25.380506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-07-20 19:00:25.380522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:112 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.335 [2024-07-20 19:00:25.380536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.380553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-07-20 19:00:25.380575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.380591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-07-20 19:00:25.380605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.380620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-07-20 19:00:25.380634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.380649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-07-20 19:00:25.380663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.380678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-07-20 19:00:25.380692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.380707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-07-20 19:00:25.380720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.380735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-07-20 19:00:25.380748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.380762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-07-20 19:00:25.380791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.380818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.380832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.380847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100000 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.380861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.380876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.380890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.380906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.380920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.380935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.380948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.380964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.380982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.380997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.381011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.381041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-07-20 19:00:25.381076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-07-20 19:00:25.381120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-07-20 19:00:25.381150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 
[2024-07-20 19:00:25.381180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-07-20 19:00:25.381210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-07-20 19:00:25.381239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-07-20 19:00:25.381269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-07-20 19:00:25.381298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.381327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.381356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.381389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.381418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.381448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.381476] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.381506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.381534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.381565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.381594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.381623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.381652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.381681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.381709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.381741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.381770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.381825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.336 [2024-07-20 19:00:25.381857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-07-20 19:00:25.381872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:25.381887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.381903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:25.381917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.381932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:25.381947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.381962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:25.381975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.381991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:25.382004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:25.382034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:25.382063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:25.382093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:25.382138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:25.382172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:25.382200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:25.382230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:25.382258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:25.382286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:25.382315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:25.382343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:25.382372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:25.382402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:26.337 [2024-07-20 19:00:25.382416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:25.382429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:25.382460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:25.382488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:25.382520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-07-20 19:00:25.382550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-07-20 19:00:25.382578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-07-20 19:00:25.382607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-07-20 19:00:25.382636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-07-20 19:00:25.382665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-07-20 19:00:25.382693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382720] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.337 [2024-07-20 19:00:25.382735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.337 [2024-07-20 19:00:25.382747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99912 len:8 PRP1 0x0 PRP2 0x0 00:30:26.337 [2024-07-20 19:00:25.382760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:25.382846] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1af15b0 was disconnected and freed. reset controller. 00:30:26.337 [2024-07-20 19:00:25.382867] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:30:26.337 [2024-07-20 19:00:25.382883] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.337 [2024-07-20 19:00:25.386172] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.337 [2024-07-20 19:00:25.386211] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1927eb0 (9): Bad file descriptor 00:30:26.337 [2024-07-20 19:00:25.423492] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:26.337 [2024-07-20 19:00:29.875186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:29.875229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:29.875255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:29.875270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:29.875294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:29.875310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:29.875325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:29.875338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:29.875353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:29.875367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:29.875397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:29.875411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 
19:00:29.875425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:29.875438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:29.875453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:29.875467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:29.875482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:29.875496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:29.875511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:29.875525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-07-20 19:00:29.875540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.337 [2024-07-20 19:00:29.875554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.875569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.875582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.875596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.875609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.875624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.875637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.875651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.875665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.875683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.875697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.875711] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.875725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.875739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.875753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.875768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.875781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.875817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.875834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.875850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.875864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.875879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.875894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.875909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.875923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.875938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.875952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.875967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.875981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.875995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.876009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.876024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:111 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.876037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.876052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.876069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.876084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.876112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.876128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.876141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.876156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.876169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.876183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.876196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.876210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.876223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.876238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.876251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.876265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.876279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.876293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.876306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.876321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45784 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.876334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.876363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.876378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.876393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.876407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.876422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.876436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.876458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.876473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.876488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.876502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.876516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.876530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.876545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.876558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.876574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.876587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-07-20 19:00:29.876602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.338 [2024-07-20 19:00:29.876616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.876631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.339 [2024-07-20 
19:00:29.876645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.876660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.339 [2024-07-20 19:00:29.876673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.876688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:45048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.876702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.876717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:45056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.876730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.876745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.876759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.876789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.876811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.876827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:45080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.876845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.876861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:45088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.876876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.876892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.876906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.876921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.339 [2024-07-20 19:00:29.876935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.876950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.339 [2024-07-20 19:00:29.876964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.876980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.339 [2024-07-20 19:00:29.876994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.339 [2024-07-20 19:00:29.877023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.339 [2024-07-20 19:00:29.877052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.339 [2024-07-20 19:00:29.877082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.339 [2024-07-20 19:00:29.877127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.339 [2024-07-20 19:00:29.877156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.339 [2024-07-20 19:00:29.877184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.339 [2024-07-20 19:00:29.877213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.339 [2024-07-20 19:00:29.877244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.339 [2024-07-20 19:00:29.877273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.339 [2024-07-20 19:00:29.877302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.339 [2024-07-20 19:00:29.877330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.339 [2024-07-20 19:00:29.877359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.339 [2024-07-20 19:00:29.877388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:45104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.877416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.877444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:45120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.877473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:45128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.877500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.877535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.877564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:45152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.877593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:45160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.877626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:45168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.877654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.877683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.877711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.877739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.877767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:45208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.877818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.877857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:45224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.877887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 
19:00:29.877902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-07-20 19:00:29.877915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-07-20 19:00:29.877930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.877944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.877960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.877973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.877988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:45272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:45280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:45328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:45336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:45360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:45384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:81 nsid:1 lba:45392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:45424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:45432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:45464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:45472 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.340 [2024-07-20 19:00:29.878892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.340 [2024-07-20 19:00:29.878921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.340 [2024-07-20 19:00:29.878950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.340 [2024-07-20 19:00:29.878979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.878995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.340 [2024-07-20 19:00:29.879009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.879037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.340 [2024-07-20 19:00:29.879054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46040 len:8 PRP1 0x0 PRP2 0x0 00:30:26.340 [2024-07-20 19:00:29.879068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.879342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.340 [2024-07-20 19:00:29.879362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.340 [2024-07-20 19:00:29.879375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46048 len:8 PRP1 0x0 PRP2 0x0 00:30:26.340 [2024-07-20 19:00:29.879388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.879405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.340 [2024-07-20 19:00:29.879417] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.340 [2024-07-20 19:00:29.879429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46056 len:8 PRP1 0x0 PRP2 0x0 00:30:26.340 [2024-07-20 19:00:29.879442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.879459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.340 [2024-07-20 19:00:29.879476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.340 [2024-07-20 19:00:29.879488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46064 len:8 PRP1 0x0 PRP2 0x0 00:30:26.340 [2024-07-20 19:00:29.879501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.340 [2024-07-20 19:00:29.879514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.340 [2024-07-20 19:00:29.879525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.879537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45496 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.879550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.879563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.879574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.879586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45504 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.879598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.879612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.879622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.879634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45512 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.879647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.879661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.879672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.879683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45520 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.879696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.879709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.879720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.879731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45528 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.879744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.879757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.879768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.879779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45536 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.879816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.879833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.879845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.879860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45544 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.879875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.879889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.879901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.879913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45552 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.879926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.879940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.879951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.879962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45560 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.879976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.879990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.880001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.880013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45568 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.880026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.880040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.880051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 
19:00:29.880063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45576 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.880077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.880091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.880117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.880128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45584 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.880141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.880155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.880166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.880177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45592 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.880190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.880203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.880213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.880225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45600 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.880237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.880251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.880265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.880276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45608 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.880289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.880303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.880314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.880326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45616 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.880339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.880352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.880363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.880374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45624 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.880387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.880401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.880412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.880423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45632 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.880436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.880449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.880460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.880471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45640 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.880484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.880498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.880509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.880520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45648 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.880533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.880546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.880557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.880568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45656 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.880581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.880594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.880605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.880616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45664 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.880628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.880645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.880657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.880668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:45672 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.880681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.880694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.880705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.880716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45680 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.880729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.880742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.341 [2024-07-20 19:00:29.880753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.341 [2024-07-20 19:00:29.880764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45688 len:8 PRP1 0x0 PRP2 0x0 00:30:26.341 [2024-07-20 19:00:29.880782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.341 [2024-07-20 19:00:29.880817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.342 [2024-07-20 19:00:29.880832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.342 [2024-07-20 19:00:29.880843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45696 len:8 PRP1 0x0 PRP2 0x0 00:30:26.342 [2024-07-20 19:00:29.880856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-07-20 19:00:29.880870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.342 [2024-07-20 19:00:29.880886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.342 [2024-07-20 19:00:29.880898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45704 len:8 PRP1 0x0 PRP2 0x0 00:30:26.342 [2024-07-20 19:00:29.880911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-07-20 19:00:29.880924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.342 [2024-07-20 19:00:29.880936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.342 [2024-07-20 19:00:29.880947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45712 len:8 PRP1 0x0 PRP2 0x0 00:30:26.342 [2024-07-20 19:00:29.880961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-07-20 19:00:29.880974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.342 [2024-07-20 19:00:29.880985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.342 [2024-07-20 19:00:29.880996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45720 len:8 PRP1 0x0 PRP2 0x0 
00:30:26.342 [2024-07-20 19:00:29.881010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-07-20 19:00:29.881023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.342 [2024-07-20 19:00:29.881034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.342 [2024-07-20 19:00:29.881045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45728 len:8 PRP1 0x0 PRP2 0x0 00:30:26.342 [2024-07-20 19:00:29.881062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-07-20 19:00:29.881076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.342 [2024-07-20 19:00:29.881087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.342 [2024-07-20 19:00:29.881113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45736 len:8 PRP1 0x0 PRP2 0x0 00:30:26.342 [2024-07-20 19:00:29.881126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-07-20 19:00:29.881140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.342 [2024-07-20 19:00:29.881150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.342 [2024-07-20 19:00:29.881161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45744 len:8 PRP1 0x0 PRP2 0x0 00:30:26.342 [2024-07-20 19:00:29.881174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-07-20 19:00:29.881188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.342 [2024-07-20 19:00:29.881198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.342 [2024-07-20 19:00:29.881209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45752 len:8 PRP1 0x0 PRP2 0x0 00:30:26.342 [2024-07-20 19:00:29.881226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-07-20 19:00:29.881240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.342 [2024-07-20 19:00:29.881251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.342 [2024-07-20 19:00:29.881262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45760 len:8 PRP1 0x0 PRP2 0x0 00:30:26.342 [2024-07-20 19:00:29.881274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-07-20 19:00:29.881287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.342 [2024-07-20 19:00:29.881302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.342 [2024-07-20 19:00:29.881315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45768 len:8 PRP1 0x0 PRP2 0x0 00:30:26.342 [2024-07-20 19:00:29.881328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-07-20 19:00:29.881341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.342 [2024-07-20 19:00:29.881352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.342 [2024-07-20 19:00:29.881363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45776 len:8 PRP1 0x0 PRP2 0x0 00:30:26.342 [2024-07-20 19:00:29.881376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-07-20 19:00:29.881389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.342 [2024-07-20 19:00:29.881399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.342 [2024-07-20 19:00:29.881410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45784 len:8 PRP1 0x0 PRP2 0x0 00:30:26.342 [2024-07-20 19:00:29.881423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-07-20 19:00:29.881436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.342 [2024-07-20 19:00:29.881446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.342 [2024-07-20 19:00:29.881460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45792 len:8 PRP1 0x0 PRP2 0x0 00:30:26.342 [2024-07-20 19:00:29.881474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-07-20 19:00:29.881487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.342 [2024-07-20 19:00:29.881498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.342 [2024-07-20 19:00:29.881509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45800 len:8 PRP1 0x0 PRP2 0x0 00:30:26.342 [2024-07-20 19:00:29.881521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-07-20 19:00:29.881535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.342 [2024-07-20 19:00:29.881545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.342 [2024-07-20 19:00:29.881556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45808 len:8 PRP1 0x0 PRP2 0x0 00:30:26.342 [2024-07-20 19:00:29.881569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-07-20 19:00:29.881582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.342 [2024-07-20 19:00:29.881592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.342 [2024-07-20 19:00:29.881603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45816 len:8 PRP1 0x0 PRP2 0x0 00:30:26.342 [2024-07-20 19:00:29.881620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-07-20 19:00:29.881634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.342 [2024-07-20 19:00:29.881645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.342 [2024-07-20 19:00:29.881656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45824 len:8 PRP1 0x0 PRP2 0x0 00:30:26.342 [2024-07-20 19:00:29.881668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-07-20 19:00:29.881681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.342 [2024-07-20 19:00:29.881693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.342 [2024-07-20 19:00:29.881704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45832 len:8 PRP1 0x0 PRP2 0x0 00:30:26.342 [2024-07-20 19:00:29.881717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-07-20 19:00:29.881730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.342 [2024-07-20 19:00:29.881741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.342 [2024-07-20 19:00:29.881752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45840 len:8 PRP1 0x0 PRP2 0x0 00:30:26.342 [2024-07-20 19:00:29.881764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-07-20 19:00:29.881799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.342 [2024-07-20 19:00:29.881813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.342 [2024-07-20 19:00:29.881824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45848 len:8 PRP1 0x0 PRP2 0x0 00:30:26.342 [2024-07-20 19:00:29.881838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-07-20 19:00:29.881854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.342 [2024-07-20 19:00:29.881866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.342 [2024-07-20 19:00:29.881878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45856 len:8 PRP1 0x0 PRP2 0x0 00:30:26.342 [2024-07-20 19:00:29.881891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-07-20 19:00:29.881905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.342 [2024-07-20 19:00:29.881916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.342 [2024-07-20 19:00:29.881928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45864 len:8 PRP1 0x0 PRP2 0x0 00:30:26.342 [2024-07-20 19:00:29.881941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:26.342 [2024-07-20 19:00:29.881954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.342 [2024-07-20 19:00:29.881965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.342 [2024-07-20 19:00:29.881977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45872 len:8 PRP1 0x0 PRP2 0x0 00:30:26.342 [2024-07-20 19:00:29.881990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.882003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.343 [2024-07-20 19:00:29.882014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.343 [2024-07-20 19:00:29.882025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45048 len:8 PRP1 0x0 PRP2 0x0 00:30:26.343 [2024-07-20 19:00:29.882039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.882053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.343 [2024-07-20 19:00:29.882065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.343 [2024-07-20 19:00:29.882076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45056 len:8 PRP1 0x0 PRP2 0x0 00:30:26.343 [2024-07-20 19:00:29.882105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.882119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.343 [2024-07-20 19:00:29.882129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.343 [2024-07-20 19:00:29.882141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45064 len:8 PRP1 0x0 PRP2 0x0 00:30:26.343 [2024-07-20 19:00:29.882153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.882167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.343 [2024-07-20 19:00:29.882178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.343 [2024-07-20 19:00:29.882189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45072 len:8 PRP1 0x0 PRP2 0x0 00:30:26.343 [2024-07-20 19:00:29.882202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.882215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.343 [2024-07-20 19:00:29.882226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.343 [2024-07-20 19:00:29.882237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45080 len:8 PRP1 0x0 PRP2 0x0 00:30:26.343 [2024-07-20 19:00:29.882250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.882266] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.343 [2024-07-20 19:00:29.882278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.343 [2024-07-20 19:00:29.882289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45088 len:8 PRP1 0x0 PRP2 0x0 00:30:26.343 [2024-07-20 19:00:29.882302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.882316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.343 [2024-07-20 19:00:29.882327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.343 [2024-07-20 19:00:29.882338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45096 len:8 PRP1 0x0 PRP2 0x0 00:30:26.343 [2024-07-20 19:00:29.882351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.882364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.343 [2024-07-20 19:00:29.882375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.343 [2024-07-20 19:00:29.882386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45880 len:8 PRP1 0x0 PRP2 0x0 00:30:26.343 [2024-07-20 19:00:29.882399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.882412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.343 [2024-07-20 19:00:29.882423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.343 [2024-07-20 19:00:29.882434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45888 len:8 PRP1 0x0 PRP2 0x0 00:30:26.343 [2024-07-20 19:00:29.882447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.882460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.343 [2024-07-20 19:00:29.882471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.343 [2024-07-20 19:00:29.882482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45896 len:8 PRP1 0x0 PRP2 0x0 00:30:26.343 [2024-07-20 19:00:29.882495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.882508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.343 [2024-07-20 19:00:29.882519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.343 [2024-07-20 19:00:29.882530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45904 len:8 PRP1 0x0 PRP2 0x0 00:30:26.343 [2024-07-20 19:00:29.882543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.882556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued 
i/o 00:30:26.343 [2024-07-20 19:00:29.882567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.343 [2024-07-20 19:00:29.882579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45912 len:8 PRP1 0x0 PRP2 0x0 00:30:26.343 [2024-07-20 19:00:29.882591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.882604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.343 [2024-07-20 19:00:29.882615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.343 [2024-07-20 19:00:29.882626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45920 len:8 PRP1 0x0 PRP2 0x0 00:30:26.343 [2024-07-20 19:00:29.882645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.882659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.343 [2024-07-20 19:00:29.882670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.343 [2024-07-20 19:00:29.882685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45928 len:8 PRP1 0x0 PRP2 0x0 00:30:26.343 [2024-07-20 19:00:29.882699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.882712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.343 [2024-07-20 19:00:29.882723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.343 [2024-07-20 19:00:29.882734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45936 len:8 PRP1 0x0 PRP2 0x0 00:30:26.343 [2024-07-20 19:00:29.882747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.882761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.343 [2024-07-20 19:00:29.882772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.343 [2024-07-20 19:00:29.882783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45944 len:8 PRP1 0x0 PRP2 0x0 00:30:26.343 [2024-07-20 19:00:29.882819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.882836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.343 [2024-07-20 19:00:29.882848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.343 [2024-07-20 19:00:29.882859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45952 len:8 PRP1 0x0 PRP2 0x0 00:30:26.343 [2024-07-20 19:00:29.882873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.882887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.343 [2024-07-20 19:00:29.882898] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.343 [2024-07-20 19:00:29.882909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45960 len:8 PRP1 0x0 PRP2 0x0 00:30:26.343 [2024-07-20 19:00:29.882923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.882937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.343 [2024-07-20 19:00:29.882948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.343 [2024-07-20 19:00:29.882959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45968 len:8 PRP1 0x0 PRP2 0x0 00:30:26.343 [2024-07-20 19:00:29.882973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.882986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.343 [2024-07-20 19:00:29.882998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.343 [2024-07-20 19:00:29.883010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45976 len:8 PRP1 0x0 PRP2 0x0 00:30:26.343 [2024-07-20 19:00:29.883023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.883036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.343 [2024-07-20 19:00:29.883051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.343 [2024-07-20 19:00:29.883063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45984 len:8 PRP1 0x0 PRP2 0x0 00:30:26.343 [2024-07-20 19:00:29.883077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.883091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.343 [2024-07-20 19:00:29.883116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.343 [2024-07-20 19:00:29.883132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45992 len:8 PRP1 0x0 PRP2 0x0 00:30:26.343 [2024-07-20 19:00:29.883146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.883160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.343 [2024-07-20 19:00:29.883171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.343 [2024-07-20 19:00:29.883182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46000 len:8 PRP1 0x0 PRP2 0x0 00:30:26.343 [2024-07-20 19:00:29.883195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-07-20 19:00:29.883208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.343 [2024-07-20 19:00:29.883219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.883230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45104 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.883244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.883257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.883268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.883279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45112 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.883292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.883306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.883316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.883328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45120 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.883340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.883353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.883365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.883376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45128 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.883388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.883401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.883412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.883424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45136 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.883436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.883453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.883464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.883475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45144 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.883488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.883501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.883512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 
19:00:29.883528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45152 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.883541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.883555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.883566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.883577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45160 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.883590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.883603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.883614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.883625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45168 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.883638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.883651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.888882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.888913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45176 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.888930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.888947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.888959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.888971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45184 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.888985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.888998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.889010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.889022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45192 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.889035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.889049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.889060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.889087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45200 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.889106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.889121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.889132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.889144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45208 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.889156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.889170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.889180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.889193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45216 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.889206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.889219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.889230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.889242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45224 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.889255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.889268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.889279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.889290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45232 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.889303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.889316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.889327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.889339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45240 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.889352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.889365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.889376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.889387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:45248 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.889399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.889413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.889423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.889434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45256 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.889447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.889461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.889471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.889486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45264 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.889499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.889513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.889524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.889535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45272 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.889547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.889561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.889571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.889582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45280 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.889595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.889608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.889619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.889630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45288 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 [2024-07-20 19:00:29.889643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.889656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.344 [2024-07-20 19:00:29.889667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.344 [2024-07-20 19:00:29.889678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45296 len:8 PRP1 0x0 PRP2 0x0 00:30:26.344 
[2024-07-20 19:00:29.889691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-07-20 19:00:29.889704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.889715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.889726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45304 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.889739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.889753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.889763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.889790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45312 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.889818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.889833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.889845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.889856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45320 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.889870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.889883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.889899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.889911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45328 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.889925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.889938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.889950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.889962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45336 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.889975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.889988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.890000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.890011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45344 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.890025] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.890038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.890049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.890061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45352 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.890074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.890101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.890113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.890125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45360 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.890138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.890151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.890162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.890173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45368 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.890186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.890200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.890210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.890222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45376 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.890234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.890248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.890258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.890270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45384 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.890282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.890299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.890310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.890322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45392 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.890334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.890348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.890358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.890370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45400 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.890383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.890397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.890407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.890418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45408 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.890432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.890446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.890457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.890468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45416 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.890481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.890494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.890505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.890516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45424 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.890529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.890542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.890553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.890564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45432 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.890578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.890591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.890602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.890613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45440 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.890626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:26.345 [2024-07-20 19:00:29.890640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.890650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.890662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45448 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.890678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.890691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.890702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.890714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45456 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.890727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.890740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.890751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.890762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45464 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.890790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.890813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.890825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.890837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45472 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.890850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.890864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.890875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.890886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45480 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.890900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.890913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.890925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.890936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45488 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.890950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.890963] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.345 [2024-07-20 19:00:29.890974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.345 [2024-07-20 19:00:29.890992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46008 len:8 PRP1 0x0 PRP2 0x0 00:30:26.345 [2024-07-20 19:00:29.891005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.345 [2024-07-20 19:00:29.891019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.346 [2024-07-20 19:00:29.891030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.346 [2024-07-20 19:00:29.891042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46016 len:8 PRP1 0x0 PRP2 0x0 00:30:26.346 [2024-07-20 19:00:29.891055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.346 [2024-07-20 19:00:29.891068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.346 [2024-07-20 19:00:29.891079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.346 [2024-07-20 19:00:29.891094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46024 len:8 PRP1 0x0 PRP2 0x0 00:30:26.346 [2024-07-20 19:00:29.891108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.346 [2024-07-20 19:00:29.891121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.346 [2024-07-20 19:00:29.891132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.346 [2024-07-20 19:00:29.891143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46032 len:8 PRP1 0x0 PRP2 0x0 00:30:26.346 [2024-07-20 19:00:29.891156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.346 [2024-07-20 19:00:29.891170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:26.346 [2024-07-20 19:00:29.891181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:26.346 [2024-07-20 19:00:29.891192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46040 len:8 PRP1 0x0 PRP2 0x0 00:30:26.346 [2024-07-20 19:00:29.891205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.346 [2024-07-20 19:00:29.891263] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x194b6d0 was disconnected and freed. reset controller. 
00:30:26.346 [2024-07-20 19:00:29.891280] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:26.346 [2024-07-20 19:00:29.891320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.346 [2024-07-20 19:00:29.891339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.346 [2024-07-20 19:00:29.891355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.346 [2024-07-20 19:00:29.891368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.346 [2024-07-20 19:00:29.891383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.346 [2024-07-20 19:00:29.891396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.346 [2024-07-20 19:00:29.891410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.346 [2024-07-20 19:00:29.891424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.346 [2024-07-20 19:00:29.891437] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.346 [2024-07-20 19:00:29.891476] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1927eb0 (9): Bad file descriptor 00:30:26.346 [2024-07-20 19:00:29.894752] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.346 [2024-07-20 19:00:30.057238] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
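The burst of ABORTED - SQ DELETION completions above is the expected signature of a path being torn down: every I/O still queued on the deleted submission queue is aborted and completed manually, then bdev_nvme fails over to the next registered address (here from 10.0.0.2:4422 back to 10.0.0.2:4420) and resets the controller. When skimming a log like this, the failover and reset notices are the lines worth counting; a minimal, purely illustrative sketch (not part of the test itself, and assuming the run output was captured in the try.txt file referenced further down):

log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
grep -c 'bdev_nvme_failover_trid' "$log"              # number of path failovers observed
grep -c 'Resetting controller successful' "$log"      # number of resets that completed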
00:30:26.346 00:30:26.346 Latency(us) 00:30:26.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.346 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:26.346 Verification LBA range: start 0x0 length 0x4000 00:30:26.346 NVMe0n1 : 15.01 8919.50 34.84 588.79 0.00 13433.67 1092.27 20777.34 00:30:26.346 =================================================================================================================== 00:30:26.346 Total : 8919.50 34.84 588.79 0.00 13433.67 1092.27 20777.34 00:30:26.346 Received shutdown signal, test time was about 15.000000 seconds 00:30:26.346 00:30:26.346 Latency(us) 00:30:26.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.346 =================================================================================================================== 00:30:26.346 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:26.346 19:00:35 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:26.346 19:00:35 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:30:26.346 19:00:35 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:30:26.346 19:00:35 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1503923 00:30:26.346 19:00:35 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:26.346 19:00:35 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1503923 /var/tmp/bdevperf.sock 00:30:26.346 19:00:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1503923 ']' 00:30:26.346 19:00:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:26.346 19:00:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:26.346 19:00:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:26.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
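The first phase of host/failover.sh ends at the trace above: the script requires exactly three 'Resetting controller successful' notices, then launches a second bdevperf in idle mode (-z) so the remaining steps can drive it over its UNIX-domain RPC socket. A condensed paraphrase of those traced commands (the grep input variable is an assumption based on the try.txt file used later; waitforlisten is the helper sourced from autotest_common.sh):

count=$(grep -c 'Resetting controller successful' "$output_log")   # $output_log assumed to point at try.txt
(( count != 3 )) && exit 1                                          # phase one must have failed over exactly three times
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock                # block until the RPC socket is up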
00:30:26.346 19:00:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:26.346 19:00:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:26.346 19:00:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:26.346 19:00:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:26.346 19:00:36 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:26.346 [2024-07-20 19:00:36.352461] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:26.346 19:00:36 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:26.346 [2024-07-20 19:00:36.589144] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:26.346 19:00:36 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:26.910 NVMe0n1 00:30:26.910 19:00:36 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:27.167 00:30:27.167 19:00:37 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:27.424 00:30:27.682 19:00:37 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:27.682 19:00:37 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:27.682 19:00:37 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:27.939 19:00:38 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:31.216 19:00:41 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:31.216 19:00:41 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:31.216 19:00:41 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1504586 00:30:31.216 19:00:41 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:31.216 19:00:41 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1504586 00:30:32.587 0 00:30:32.587 19:00:42 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:32.587 [2024-07-20 19:00:35.871116] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
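The trace above prepares the second failover pass: listeners on ports 4421 and 4422 are added to cnode1, the NVMe0 controller is attached through all three ports so bdevperf sees a single bdev with three paths, the 4420 path is detached to force a failover, and the verify workload is re-run through the bdevperf RPC socket. A minimal sketch of that sequence, paraphrasing the traced commands (the loop is a condensation, not the literal script text):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
for port in 4420 4421 4422; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
                -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1    # drop the primary path
sleep 3                                                                      # let bdev_nvme settle on 4421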
00:30:32.587 [2024-07-20 19:00:35.871214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1503923 ] 00:30:32.587 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.587 [2024-07-20 19:00:35.931831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.587 [2024-07-20 19:00:36.016521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.587 [2024-07-20 19:00:38.206004] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:32.587 [2024-07-20 19:00:38.206088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:32.587 [2024-07-20 19:00:38.206121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.587 [2024-07-20 19:00:38.206139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:32.587 [2024-07-20 19:00:38.206153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.587 [2024-07-20 19:00:38.206167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:32.587 [2024-07-20 19:00:38.206192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.587 [2024-07-20 19:00:38.206206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:32.587 [2024-07-20 19:00:38.206220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.587 [2024-07-20 19:00:38.206234] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:32.587 [2024-07-20 19:00:38.206278] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:32.587 [2024-07-20 19:00:38.206311] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15deeb0 (9): Bad file descriptor 00:30:32.587 [2024-07-20 19:00:38.217358] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:32.587 Running I/O for 1 seconds... 
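The try.txt excerpt above shows the failover doing its job: bdevperf starts on core 0, finds the 4420 path gone, fails over to 10.0.0.2:4421, resets the controller and runs the 1-second verify workload on the surviving path. The workload is not started by relaunching bdevperf but by poking the already-running instance over its RPC socket, as traced earlier; a short sketch of that step (the test PID is run-specific and therefore omitted):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!
wait "$run_test_pid"        # returns once the -t 1 verify run above has finished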
00:30:32.587 00:30:32.587 Latency(us) 00:30:32.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.587 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:32.587 Verification LBA range: start 0x0 length 0x4000 00:30:32.587 NVMe0n1 : 1.00 8548.36 33.39 0.00 0.00 14912.46 2463.67 15437.37 00:30:32.587 =================================================================================================================== 00:30:32.587 Total : 8548.36 33.39 0.00 0.00 14912.46 2463.67 15437.37 00:30:32.587 19:00:42 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:32.587 19:00:42 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:32.587 19:00:42 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:32.854 19:00:43 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:32.854 19:00:43 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:33.110 19:00:43 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:33.366 19:00:43 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:36.638 19:00:46 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:36.638 19:00:46 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:36.638 19:00:46 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1503923 00:30:36.638 19:00:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1503923 ']' 00:30:36.638 19:00:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1503923 00:30:36.638 19:00:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:30:36.638 19:00:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:36.638 19:00:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1503923 00:30:36.638 19:00:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:36.638 19:00:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:36.638 19:00:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1503923' 00:30:36.638 killing process with pid 1503923 00:30:36.638 19:00:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1503923 00:30:36.639 19:00:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1503923 00:30:36.896 19:00:47 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:36.896 19:00:47 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:37.153 19:00:47 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:37.153 
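After the 1-second run the script walks the remaining paths: it confirms NVMe0 is still registered, detaches the 4422 path, confirms again, detaches 4421, sleeps, and performs one last controller check before shutting bdevperf down. Written as a loop, paraphrasing the traced commands above:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for port in 4422 4421; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
                -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
sleep 3
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0    # final check (host/failover.sh@103)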
19:00:47 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:37.153 19:00:47 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:37.153 19:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:37.153 19:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:30:37.153 19:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:37.153 19:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:30:37.153 19:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:37.153 19:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:37.153 rmmod nvme_tcp 00:30:37.153 rmmod nvme_fabrics 00:30:37.153 rmmod nvme_keyring 00:30:37.409 19:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:37.409 19:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:30:37.409 19:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:30:37.409 19:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1501760 ']' 00:30:37.409 19:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1501760 00:30:37.409 19:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1501760 ']' 00:30:37.409 19:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1501760 00:30:37.409 19:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:30:37.409 19:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:37.409 19:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1501760 00:30:37.409 19:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:37.409 19:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:37.409 19:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1501760' 00:30:37.409 killing process with pid 1501760 00:30:37.409 19:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1501760 00:30:37.410 19:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1501760 00:30:37.676 19:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:37.676 19:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:37.676 19:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:37.676 19:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:37.676 19:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:37.676 19:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.676 19:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:37.676 19:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.577 19:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:39.577 00:30:39.577 real 0m34.817s 00:30:39.577 user 1m59.160s 00:30:39.577 sys 0m6.517s 00:30:39.577 19:00:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:39.577 19:00:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
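A rough shell equivalent of the teardown that the killprocess and nvmftestfini helpers walk through above, written out as a hedged sketch. The pid 1501760, the cvl_0_1 interface and the cvl_0_0_ns_spdk namespace are the values this run printed; the ip netns delete step is an assumption about what the _remove_spdk_ns helper does rather than something the log shows verbatim.

  # teardown sketch based on the logged steps (helper internals assumed)
  sudo modprobe -v -r nvme-tcp             # unload the kernel NVMe/TCP initiator module
  sudo modprobe -v -r nvme-fabrics         # and the fabrics core it pulls in
  kill 1501760                             # stop the nvmf_tgt app (pid printed above)
  wait 1501760 2>/dev/null || true         # reap it; ignore "not a child" when run interactively
  sudo ip netns delete cvl_0_0_ns_spdk     # assumed: drop the target-side network namespace
  sudo ip -4 addr flush cvl_0_1            # clear the initiator-side address, as logged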
00:30:39.577 ************************************ 00:30:39.577 END TEST nvmf_failover 00:30:39.577 ************************************ 00:30:39.577 19:00:49 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:39.577 19:00:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:39.577 19:00:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:39.577 19:00:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:39.577 ************************************ 00:30:39.577 START TEST nvmf_host_discovery 00:30:39.577 ************************************ 00:30:39.577 19:00:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:39.577 * Looking for test storage... 00:30:39.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:39.577 19:00:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:39.577 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:39.577 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:39.577 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:39.577 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:39.577 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:39.577 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:39.843 19:00:49 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:30:39.843 19:00:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:41.741 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:41.742 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:41.742 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:41.742 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:41.742 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:41.742 19:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:41.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:41.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:30:41.742 00:30:41.742 --- 10.0.0.2 ping statistics --- 00:30:41.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.742 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:41.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:41.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:30:41.742 00:30:41.742 --- 10.0.0.1 ping statistics --- 00:30:41.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.742 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1507303 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1507303 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 1507303 ']' 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:41.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:41.742 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.001 [2024-07-20 19:00:52.080215] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:30:42.001 [2024-07-20 19:00:52.080286] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.001 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.001 [2024-07-20 19:00:52.144119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.001 [2024-07-20 19:00:52.229195] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:42.001 [2024-07-20 19:00:52.229259] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:42.001 [2024-07-20 19:00:52.229274] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:42.001 [2024-07-20 19:00:52.229285] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:42.001 [2024-07-20 19:00:52.229295] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
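The app_setup_trace notices just above explain how to snapshot the tracepoints this nvmf_tgt was started with (-e 0xFFFF, instance 0). A minimal example following that hint, assuming the spdk_trace tool sits under build/bin in this workspace and using the /dev/shm/nvmf_trace.0 file the notice names:

  # decode a live snapshot of the nvmf app's trace buffer (shared-memory instance 0)
  ./build/bin/spdk_trace -s nvmf -i 0
  # or keep the shared-memory file around for offline analysis/debug, as the notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0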
00:30:42.001 [2024-07-20 19:00:52.229343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.259 [2024-07-20 19:00:52.366955] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.259 [2024-07-20 19:00:52.375189] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.259 null0 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.259 null1 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1507329 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 1507329 /tmp/host.sock 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 1507329 ']' 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:42.259 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:42.259 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:42.260 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:42.260 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.260 [2024-07-20 19:00:52.447841] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:30:42.260 [2024-07-20 19:00:52.447910] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507329 ] 00:30:42.260 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.260 [2024-07-20 19:00:52.509817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.518 [2024-07-20 19:00:52.601363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # 
sort 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:42.518 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:42.775 19:00:52 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:42.775 19:00:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.775 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:42.775 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:42.775 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.775 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.775 [2024-07-20 19:00:53.016862] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:42.775 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.775 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:30:42.775 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:42.775 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:42.775 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.775 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.775 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:42.775 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:42.775 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.775 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:30:42.775 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:30:42.775 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:42.775 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.775 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.775 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:42.775 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:42.775 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:42.775 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.032 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:30:43.032 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:30:43.032 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:43.032 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:43.032 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:43.032 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:43.032 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:43.032 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:30:43.033 19:00:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:30:43.597 [2024-07-20 19:00:53.777063] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:43.597 [2024-07-20 19:00:53.777118] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:43.597 [2024-07-20 19:00:53.777140] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:43.597 [2024-07-20 19:00:53.903548] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:43.855 [2024-07-20 19:00:53.965324] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:30:43.855 [2024-07-20 19:00:53.965350] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:44.114 19:00:54 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:44.114 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:44.373 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.373 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:44.373 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:44.373 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:30:44.373 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:44.373 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:44.373 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:44.373 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:44.373 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:44.373 19:00:54 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:44.373 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:44.373 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:44.373 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:44.373 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.373 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:44.373 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.373 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:44.373 19:00:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:30:44.373 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:44.373 19:00:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:30:45.743 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:45.743 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:45.743 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:45.743 19:00:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:45.743 19:00:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:45.743 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.743 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:45.743 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.743 19:00:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:45.743 19:00:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:45.743 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:45.743 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:45.743 19:00:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:45.743 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.743 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:45.743 [2024-07-20 19:00:55.684875] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:45.743 [2024-07-20 19:00:55.685426] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:45.743 [2024-07-20 19:00:55.685464] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:45.743 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.743 19:00:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:45.743 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:45.743 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:45.743 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:45.743 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" 
]]' 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:45.744 19:00:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:30:45.744 [2024-07-20 19:00:55.812283] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:45.744 [2024-07-20 19:00:55.912163] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:45.744 [2024-07-20 19:00:55.912185] 
bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:45.744 [2024-07-20 19:00:55.912212] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.675 [2024-07-20 19:00:56.905275] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:46.675 [2024-07-20 19:00:56.905317] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:46.675 [2024-07-20 19:00:56.908020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.675 [2024-07-20 19:00:56.908052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.675 [2024-07-20 19:00:56.908085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.675 [2024-07-20 19:00:56.908107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.675 [2024-07-20 19:00:56.908121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.675 [2024-07-20 19:00:56.908158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.675 [2024-07-20 19:00:56.908175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.675 [2024-07-20 19:00:56.908192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.675 [2024-07-20 19:00:56.908206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f5450 is same with the state(5) to be set 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == 
'"nvme0"' ']]' 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:46.675 [2024-07-20 19:00:56.918024] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f5450 (9): Bad file descriptor 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.675 [2024-07-20 19:00:56.928092] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:46.675 [2024-07-20 19:00:56.928463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.675 [2024-07-20 19:00:56.928492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f5450 with addr=10.0.0.2, port=4420 00:30:46.675 [2024-07-20 19:00:56.928509] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f5450 is same with the state(5) to be set 00:30:46.675 [2024-07-20 19:00:56.928549] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f5450 (9): Bad file descriptor 00:30:46.675 [2024-07-20 19:00:56.928589] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:46.675 [2024-07-20 19:00:56.928610] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:46.675 [2024-07-20 19:00:56.928628] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:46.675 [2024-07-20 19:00:56.928652] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.675 [2024-07-20 19:00:56.938171] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:46.675 [2024-07-20 19:00:56.938478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.675 [2024-07-20 19:00:56.938506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f5450 with addr=10.0.0.2, port=4420 00:30:46.675 [2024-07-20 19:00:56.938522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f5450 is same with the state(5) to be set 00:30:46.675 [2024-07-20 19:00:56.938544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f5450 (9): Bad file descriptor 00:30:46.675 [2024-07-20 19:00:56.938584] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:46.675 [2024-07-20 19:00:56.938600] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:46.675 [2024-07-20 19:00:56.938613] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
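For reference, the waitforcondition / eval / sleep lines repeated throughout this trace come from a small polling helper in common/autotest_common.sh. A minimal sketch reconstructed from the xtrace above (illustrative only, not the verbatim source):

    # Poll an arbitrary bash condition up to ~10 times, one second apart.
    # The condition string (e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]')
    # is eval'ed verbatim, which is why the quoted fragments appear in the trace.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1
    }
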
00:30:46.675 [2024-07-20 19:00:56.938665] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.675 [2024-07-20 19:00:56.948250] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:46.675 [2024-07-20 19:00:56.948647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.675 [2024-07-20 19:00:56.948679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f5450 with addr=10.0.0.2, port=4420 00:30:46.675 [2024-07-20 19:00:56.948698] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f5450 is same with the state(5) to be set 00:30:46.675 [2024-07-20 19:00:56.948729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f5450 (9): Bad file descriptor 00:30:46.675 [2024-07-20 19:00:56.948781] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:46.675 [2024-07-20 19:00:56.948813] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:46.675 [2024-07-20 19:00:56.948850] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:46.675 [2024-07-20 19:00:56.948871] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:46.675 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:46.675 [2024-07-20 19:00:56.958330] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:46.675 [2024-07-20 19:00:56.958722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.675 [2024-07-20 19:00:56.958754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f5450 with addr=10.0.0.2, port=4420 00:30:46.675 [2024-07-20 19:00:56.958773] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x13f5450 is same with the state(5) to be set 00:30:46.675 [2024-07-20 19:00:56.958806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f5450 (9): Bad file descriptor 00:30:46.676 [2024-07-20 19:00:56.959662] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:46.676 [2024-07-20 19:00:56.959689] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:46.676 [2024-07-20 19:00:56.959712] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:46.676 [2024-07-20 19:00:56.959736] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.676 [2024-07-20 19:00:56.968411] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:46.676 [2024-07-20 19:00:56.968710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.676 [2024-07-20 19:00:56.968744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f5450 with addr=10.0.0.2, port=4420 00:30:46.676 [2024-07-20 19:00:56.968766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f5450 is same with the state(5) to be set 00:30:46.676 [2024-07-20 19:00:56.968805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f5450 (9): Bad file descriptor 00:30:46.676 [2024-07-20 19:00:56.968848] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:46.676 [2024-07-20 19:00:56.968867] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:46.676 [2024-07-20 19:00:56.968886] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:46.676 [2024-07-20 19:00:56.968910] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.676 [2024-07-20 19:00:56.978491] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:46.676 [2024-07-20 19:00:56.978784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.676 [2024-07-20 19:00:56.978820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f5450 with addr=10.0.0.2, port=4420 00:30:46.676 [2024-07-20 19:00:56.978838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f5450 is same with the state(5) to be set 00:30:46.676 [2024-07-20 19:00:56.978860] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f5450 (9): Bad file descriptor 00:30:46.676 [2024-07-20 19:00:56.978882] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:46.676 [2024-07-20 19:00:56.978897] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:46.676 [2024-07-20 19:00:56.978911] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:46.676 [2024-07-20 19:00:56.978944] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
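The repeated "connect() failed, errno = 111" messages above are expected at this point: errno 111 is ECONNREFUSED on Linux, and the 4420 listener was just removed, so the host's reconnect attempts to that path fail until discovery drops it. The test then checks that only the second port remains, using the rpc_cmd/jq pipeline visible in the trace. A sketch of that helper (socket path and names taken from the log; not the verbatim source):

    # List the trsvcids (ports) of all paths attached to a given controller.
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    # host/discovery.sh@131 then waits for only the second port to be left:
    #   [[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]   # i.e. "4421"
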
00:30:46.676 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.676 [2024-07-20 19:00:56.988568] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:46.676 [2024-07-20 19:00:56.988940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.676 [2024-07-20 19:00:56.988968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f5450 with addr=10.0.0.2, port=4420 00:30:46.676 [2024-07-20 19:00:56.988984] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f5450 is same with the state(5) to be set 00:30:46.676 [2024-07-20 19:00:56.989007] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f5450 (9): Bad file descriptor 00:30:46.676 [2024-07-20 19:00:56.989040] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:46.676 [2024-07-20 19:00:56.989059] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:46.676 [2024-07-20 19:00:56.989081] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:46.676 [2024-07-20 19:00:56.989101] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.676 [2024-07-20 19:00:56.993043] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:46.676 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:46.676 [2024-07-20 19:00:56.993099] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:46.676 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:46.676 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:46.676 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:46.676 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:46.676 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:46.676 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:46.676 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:30:46.676 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:46.676 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.676 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:46.676 19:00:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.676 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:46.676 19:00:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # 
jq '. | length' 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.933 19:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.303 [2024-07-20 19:00:58.238588] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:48.303 [2024-07-20 19:00:58.238613] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:48.303 [2024-07-20 19:00:58.238637] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:48.303 [2024-07-20 19:00:58.326953] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:48.303 [2024-07-20 19:00:58.593977] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:48.303 [2024-07-20 19:00:58.594011] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:30:48.303 request: 00:30:48.303 { 00:30:48.303 "name": "nvme", 00:30:48.303 "trtype": "tcp", 00:30:48.303 "traddr": "10.0.0.2", 00:30:48.303 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:48.303 "adrfam": "ipv4", 00:30:48.303 "trsvcid": "8009", 00:30:48.303 "wait_for_attach": true, 00:30:48.303 "method": "bdev_nvme_start_discovery", 00:30:48.303 "req_id": 1 00:30:48.303 } 00:30:48.303 Got JSON-RPC error response 00:30:48.303 response: 00:30:48.303 { 00:30:48.303 "code": -17, 00:30:48.303 "message": "File exists" 00:30:48.303 } 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:48.303 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.560 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:30:48.560 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:30:48.560 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:48.560 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:48.560 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.560 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:48.560 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.560 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:48.560 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.560 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.561 request: 00:30:48.561 { 00:30:48.561 "name": "nvme_second", 00:30:48.561 "trtype": "tcp", 00:30:48.561 "traddr": "10.0.0.2", 00:30:48.561 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:48.561 "adrfam": "ipv4", 00:30:48.561 "trsvcid": "8009", 00:30:48.561 "wait_for_attach": true, 00:30:48.561 "method": "bdev_nvme_start_discovery", 00:30:48.561 "req_id": 1 00:30:48.561 } 00:30:48.561 Got JSON-RPC error response 00:30:48.561 response: 00:30:48.561 { 00:30:48.561 "code": -17, 00:30:48.561 "message": "File exists" 00:30:48.561 } 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.561 19:00:58 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.561 19:00:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:49.492 [2024-07-20 19:00:59.798299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.492 [2024-07-20 19:00:59.798365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f1500 with addr=10.0.0.2, port=8010 00:30:49.492 [2024-07-20 19:00:59.798398] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:49.492 [2024-07-20 19:00:59.798424] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:49.492 [2024-07-20 19:00:59.798439] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:50.864 [2024-07-20 19:01:00.800692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.864 [2024-07-20 19:01:00.800757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f1500 with addr=10.0.0.2, port=8010 00:30:50.864 [2024-07-20 19:01:00.800791] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:50.864 [2024-07-20 19:01:00.800819] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:50.864 [2024-07-20 19:01:00.800849] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:51.799 [2024-07-20 19:01:01.802762] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:51.799 request: 00:30:51.799 { 00:30:51.799 "name": "nvme_second", 00:30:51.799 "trtype": "tcp", 00:30:51.799 "traddr": "10.0.0.2", 00:30:51.799 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:51.799 "adrfam": "ipv4", 00:30:51.799 "trsvcid": "8010", 00:30:51.799 "attach_timeout_ms": 3000, 00:30:51.799 "method": "bdev_nvme_start_discovery", 00:30:51.799 "req_id": 1 00:30:51.799 } 00:30:51.799 Got JSON-RPC error response 00:30:51.799 response: 00:30:51.799 { 00:30:51.799 "code": -110, 00:30:51.799 "message": "Connection timed out" 
00:30:51.799 } 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1507329 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:51.799 rmmod nvme_tcp 00:30:51.799 rmmod nvme_fabrics 00:30:51.799 rmmod nvme_keyring 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1507303 ']' 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1507303 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 1507303 ']' 00:30:51.799 19:01:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 1507303 00:30:51.800 19:01:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:30:51.800 19:01:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:51.800 19:01:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1507303 00:30:51.800 19:01:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:51.800 19:01:01 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:51.800 19:01:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1507303' 00:30:51.800 killing process with pid 1507303 00:30:51.800 19:01:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 1507303 00:30:51.800 19:01:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 1507303 00:30:52.060 19:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:52.060 19:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:52.060 19:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:52.060 19:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:52.060 19:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:52.060 19:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.060 19:01:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:52.060 19:01:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:53.958 19:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:53.958 00:30:53.958 real 0m14.372s 00:30:53.958 user 0m21.397s 00:30:53.958 sys 0m2.939s 00:30:53.958 19:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:53.958 19:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.958 ************************************ 00:30:53.958 END TEST nvmf_host_discovery 00:30:53.958 ************************************ 00:30:53.958 19:01:04 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:53.958 19:01:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:53.958 19:01:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:53.958 19:01:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:53.958 ************************************ 00:30:53.958 START TEST nvmf_host_multipath_status 00:30:53.958 ************************************ 00:30:53.958 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:54.220 * Looking for test storage... 
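The next suite is launched through the run_test wrapper, as shown in the trace above. Outside the CI harness the same script can be invoked directly (a sketch; the workspace path is specific to this Jenkins node, and the host tests generally need root because nvmftestinit reconfigures NICs and kernel modules):

    # Run the multipath status host test against the TCP transport.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./test/nvmf/host/multipath_status.sh --transport=tcp
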
00:30:54.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:54.220 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:54.220 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:54.220 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:54.220 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:54.220 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:54.220 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:54.220 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:54.220 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:54.220 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:54.220 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:54.220 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:54.220 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:54.220 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:54.220 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:54.220 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:54.220 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:54.221 19:01:04 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:30:54.221 19:01:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:56.118 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:56.118 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
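The device scan above keys off PCI vendor/device IDs (here Intel 0x8086:0x159b, the two E810 "ice" ports at 0000:0a:00.0/1) and, for the TCP transport, keeps only functions that expose a kernel net device, as the entries that follow show. A minimal standalone sketch of the same sysfs lookup; as an assumption it checks only 0x159b, while the real nvmf/common.sh also matches the other E810/X722/Mellanox IDs listed above:

# Sketch: find Intel E810 (0x8086:0x159b) PCI functions and their net devices.
intel=0x8086
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == "$intel" && $(cat "$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] || continue          # skip functions with no netdev bound
        echo "Found net device ${net##*/} under ${pci##*/}"
    done
done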
00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:56.118 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:56.118 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:56.118 19:01:06 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:56.118 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:56.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:56.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:30:56.377 00:30:56.377 --- 10.0.0.2 ping statistics --- 00:30:56.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.377 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:56.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:56.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:30:56.377 00:30:56.377 --- 10.0.0.1 ping statistics --- 00:30:56.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.377 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1510495 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1510495 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 1510495 ']' 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:56.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:56.377 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:56.377 [2024-07-20 19:01:06.534168] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
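Condensed, the nvmf_tcp_init sequence in the entries above splits the two ports between the root namespace (initiator, cvl_0_1, 10.0.0.1) and a private namespace (target, cvl_0_0, 10.0.0.2), so the NVMe/TCP traffic really crosses the link before the target application is started. A sketch of the same steps, with the interface names and addresses taken from this run:

# Target NIC lives in its own namespace; initiator NIC stays in the root namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                  # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator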
00:30:56.377 [2024-07-20 19:01:06.534239] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:56.377 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.377 [2024-07-20 19:01:06.599418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:56.377 [2024-07-20 19:01:06.687259] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:56.377 [2024-07-20 19:01:06.687328] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:56.377 [2024-07-20 19:01:06.687341] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:56.377 [2024-07-20 19:01:06.687353] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:56.377 [2024-07-20 19:01:06.687364] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:56.377 [2024-07-20 19:01:06.687453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.377 [2024-07-20 19:01:06.687458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.635 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:56.635 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:30:56.635 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:56.635 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:56.635 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:56.636 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:56.636 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1510495 00:30:56.636 19:01:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:56.894 [2024-07-20 19:01:07.092541] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:56.894 19:01:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:57.152 Malloc0 00:30:57.152 19:01:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:57.409 19:01:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:57.666 19:01:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:58.231 [2024-07-20 19:01:08.255118] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.231 19:01:08 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:58.231 [2024-07-20 19:01:08.519860] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:58.231 19:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1510778 00:30:58.231 19:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:58.231 19:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:58.231 19:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1510778 /var/tmp/bdevperf.sock 00:30:58.231 19:01:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 1510778 ']' 00:30:58.231 19:01:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:58.231 19:01:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:58.231 19:01:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:58.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:58.231 19:01:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:58.231 19:01:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:58.796 19:01:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:58.796 19:01:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:30:58.796 19:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:58.796 19:01:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:30:59.362 Nvme0n1 00:30:59.362 19:01:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:59.955 Nvme0n1 00:30:59.955 19:01:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:59.955 19:01:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:01.864 19:01:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:01.864 19:01:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:02.134 19:01:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:02.396 19:01:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:03.325 19:01:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:03.325 19:01:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:03.325 19:01:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:03.325 19:01:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:03.584 19:01:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:03.584 19:01:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:03.584 19:01:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:03.584 19:01:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:03.842 19:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:03.842 19:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:03.842 19:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:03.842 19:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:04.099 19:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:04.099 19:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:04.099 19:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.099 19:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:04.407 19:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:04.407 19:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:04.407 19:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.407 19:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:31:04.664 19:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:04.664 19:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:04.664 19:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.664 19:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:04.921 19:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:04.921 19:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:04.921 19:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:05.177 19:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:05.434 19:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:06.365 19:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:06.365 19:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:06.365 19:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.365 19:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:06.621 19:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:06.621 19:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:06.621 19:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.621 19:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:06.879 19:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.879 19:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:06.879 19:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.879 19:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:07.137 19:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:31:07.137 19:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:07.137 19:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:07.137 19:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:07.445 19:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:07.445 19:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:07.445 19:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:07.445 19:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:07.703 19:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:07.703 19:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:07.703 19:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:07.703 19:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:07.960 19:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:07.960 19:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:07.960 19:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:08.218 19:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:08.475 19:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:09.406 19:01:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:09.406 19:01:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:09.406 19:01:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.406 19:01:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:09.664 19:01:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.664 19:01:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:31:09.664 19:01:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.664 19:01:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:09.921 19:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:09.921 19:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:09.921 19:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.921 19:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:10.178 19:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:10.178 19:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:10.178 19:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:10.178 19:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:10.435 19:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:10.435 19:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:10.435 19:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:10.435 19:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:10.693 19:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:10.693 19:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:10.693 19:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:10.693 19:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:10.950 19:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:10.950 19:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:10.950 19:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:11.207 19:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:11.464 19:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:12.399 19:01:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:12.399 19:01:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:12.399 19:01:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.399 19:01:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:12.657 19:01:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.657 19:01:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:12.657 19:01:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.657 19:01:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:12.913 19:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:12.913 19:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:12.913 19:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.913 19:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:13.170 19:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:13.171 19:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:13.171 19:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:13.171 19:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:13.428 19:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:13.428 19:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:13.428 19:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:13.428 19:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:13.686 19:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
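Every port_status assertion in this trace is the same query: dump bdevperf's I/O paths over its RPC socket and compare one field of the path on a given listener port with the expected value. A small equivalent of that helper, using the rpc.py path and bdevperf socket from this run; check_status simply applies it to the current/connected/accessible fields on ports 4420 and 4421:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

# port_status <trsvcid> <field> <expected>, e.g. port_status 4420 accessible true
port_status() {
    local port=$1 field=$2 expected=$3 actual
    actual=$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ $actual == "$expected" ]]
}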
00:31:13.686 19:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:13.686 19:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:13.686 19:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:13.947 19:01:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:13.947 19:01:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:13.947 19:01:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:14.205 19:01:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:14.461 19:01:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:15.391 19:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:15.391 19:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:15.391 19:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.391 19:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:15.649 19:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:15.649 19:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:15.649 19:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.649 19:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:15.906 19:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:15.906 19:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:15.906 19:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.906 19:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:16.164 19:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:16.164 19:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
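Each numbered scenario here is one round of the same pattern: set the ANA state of the two listeners on the target, give the initiator a second to observe the change, then assert which path is current, connected and accessible. A sketch of one round, using the non_optimized/inaccessible case just above with the port_status helper sketched earlier; the expected values are copied from that round's check_status call:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Port 4420 stays usable but unpreferred, port 4421 becomes unusable.
$rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
$rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
sleep 1                                   # let bdevperf pick up the ANA change

# Expected: I/O rides the 4420 path only; 4421 stays connected but is no longer accessible.
port_status 4420 current    true  && port_status 4421 current    false
port_status 4420 connected  true  && port_status 4421 connected  true
port_status 4420 accessible true  && port_status 4421 accessible false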
00:31:16.164 19:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:16.164 19:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:16.421 19:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:16.421 19:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:16.421 19:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:16.421 19:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:16.679 19:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:16.679 19:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:16.679 19:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:16.679 19:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:16.936 19:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:16.936 19:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:16.936 19:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:17.194 19:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:17.452 19:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:18.388 19:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:18.388 19:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:18.388 19:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.388 19:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:18.645 19:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:18.645 19:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:18.645 19:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.645 19:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:18.903 19:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.903 19:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:18.903 19:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.903 19:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:19.160 19:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:19.160 19:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:19.160 19:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.160 19:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:19.456 19:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:19.456 19:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:19.456 19:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.456 19:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:19.714 19:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:19.714 19:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:19.714 19:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.714 19:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:19.971 19:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:19.971 19:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:20.227 19:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:20.227 19:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:31:20.497 19:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:20.753 19:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:21.686 19:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:21.686 19:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:21.686 19:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.686 19:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:21.943 19:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.943 19:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:21.943 19:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.943 19:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:22.200 19:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.200 19:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:22.200 19:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.200 19:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:22.461 19:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.461 19:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:22.461 19:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.461 19:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:22.721 19:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.721 19:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:22.721 19:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.721 19:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:22.978 19:01:33 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.978 19:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:22.978 19:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.978 19:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:23.237 19:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:23.237 19:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:23.237 19:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:23.494 19:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:23.752 19:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:24.684 19:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:24.684 19:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:24.684 19:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.684 19:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:24.941 19:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:24.941 19:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:24.941 19:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.941 19:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:25.207 19:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.207 19:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:25.207 19:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.207 19:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:25.468 19:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.468 19:01:35 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:25.468 19:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.468 19:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:25.726 19:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.726 19:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:25.726 19:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.726 19:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:25.984 19:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.984 19:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:25.984 19:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.984 19:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:26.242 19:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.242 19:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:26.242 19:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:26.500 19:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:26.772 19:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:27.705 19:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:27.705 19:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:27.705 19:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.705 19:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:27.962 19:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.962 19:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:27.962 19:01:38 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.962 19:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:28.219 19:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.219 19:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:28.219 19:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.219 19:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:28.477 19:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.477 19:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:28.477 19:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.477 19:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:28.735 19:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.735 19:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:28.735 19:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.735 19:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:28.992 19:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.992 19:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:28.992 19:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.992 19:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:29.250 19:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.250 19:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:29.250 19:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:29.507 19:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:29.765 19:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:30.709 19:01:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:30.709 19:01:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:30.709 19:01:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.709 19:01:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:30.966 19:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.966 19:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:30.966 19:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.966 19:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:31.224 19:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:31.224 19:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:31.224 19:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.224 19:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:31.480 19:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.480 19:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:31.480 19:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.480 19:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:31.739 19:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.739 19:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:31.739 19:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.739 19:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:32.002 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.002 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:32.002 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.002 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:32.259 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:32.259 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1510778 00:31:32.259 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 1510778 ']' 00:31:32.259 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 1510778 00:31:32.259 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:32.259 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:32.259 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1510778 00:31:32.259 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:31:32.259 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:31:32.259 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1510778' 00:31:32.259 killing process with pid 1510778 00:31:32.259 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 1510778 00:31:32.259 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 1510778 00:31:32.259 Connection closed with partial response: 00:31:32.259 00:31:32.259 00:31:32.518 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1510778 00:31:32.518 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:32.518 [2024-07-20 19:01:08.577923] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:32.518 [2024-07-20 19:01:08.578019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1510778 ] 00:31:32.518 EAL: No free 2048 kB hugepages reported on node 1 00:31:32.518 [2024-07-20 19:01:08.637701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.518 [2024-07-20 19:01:08.725660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:32.518 Running I/O for 90 seconds... 
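
[editor's note] For readability, here is a sketch of the helpers the xtrace above is exercising (port_status, check_status, set_ANA_state), reconstructed only from the traced commands; the actual test/nvmf/host/multipath_status.sh in the SPDK tree may differ in detail. The rpc.py path, bdevperf RPC socket, subsystem NQN and listener ports are taken from the trace itself.

    #!/usr/bin/env bash
    # Sketch reconstructed from the xtrace above; not a verbatim copy of
    # test/nvmf/host/multipath_status.sh.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    # port_status <trsvcid> <field> <expected>: query bdevperf's io_paths and
    # compare one field (current/connected/accessible) of the path on that port.
    port_status() {
        local port=$1 field=$2 expected=$3
        local actual
        actual=$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }

    # check_status <4420.current> <4421.current> <4420.connected> <4421.connected>
    #              <4420.accessible> <4421.accessible>
    check_status() {
        port_status 4420 current "$1"
        port_status 4421 current "$2"
        port_status 4420 connected "$3"
        port_status 4421 connected "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }

    # set_ANA_state <state for 4420> <state for 4421>: flip the ANA state of the
    # two target-side listeners, then the caller sleeps 1s and re-checks status.
    set_ANA_state() {
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # e.g. the last scenario traced above:
    #   set_ANA_state non_optimized inaccessible; sleep 1
    #   check_status true false true true true false
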
00:31:32.518 [2024-07-20 19:01:24.397351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.518 [2024-07-20 19:01:24.397407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:32.518 [2024-07-20 19:01:24.397467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.518 [2024-07-20 19:01:24.397487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:32.518 [2024-07-20 19:01:24.397511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.518 [2024-07-20 19:01:24.397528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:32.518 [2024-07-20 19:01:24.397551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.518 [2024-07-20 19:01:24.397568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:32.518 [2024-07-20 19:01:24.397606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.518 [2024-07-20 19:01:24.397622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:32.518 [2024-07-20 19:01:24.397660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.518 [2024-07-20 19:01:24.397678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:32.518 [2024-07-20 19:01:24.397701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.518 [2024-07-20 19:01:24.397718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:32.518 [2024-07-20 19:01:24.397741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.518 [2024-07-20 19:01:24.397758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:32.518 [2024-07-20 19:01:24.398334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.518 [2024-07-20 19:01:24.398359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:32.518 [2024-07-20 19:01:24.398389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.518 [2024-07-20 19:01:24.398409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:91 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:32.518 [2024-07-20 19:01:24.398433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.398464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.398490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.398507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.398531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.398548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.398572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.398590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.398614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.398631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.398655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.398672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.399772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.399806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.399836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.399854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.399880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.399898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.399923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.399940] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.399965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.399982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.400008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.400024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.400049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.400066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.400112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.400130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.400171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.400189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.400217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.400234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.400260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.400278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.400304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.400321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.400348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.400366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.400393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 
[2024-07-20 19:01:24.400410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.400437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.400454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.400480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.400513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.400541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.400558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.400741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.400765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.400806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.400836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.400870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.400889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.400917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.400936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.400964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.400982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.401010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.401027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.401056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96128 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.401074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.401118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.401136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.401179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.401196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.401224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.401240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.401267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.401283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.401310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.401327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.401354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.401370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.401397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.401414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.401441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.401461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.401489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.401506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.401533] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.401550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.401576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.401593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.401619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.401636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.401662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.401679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.401706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.401722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.401748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.401765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.401797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.401830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.401859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.401877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.401905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.401922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.401949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.401966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 
19:01:24.401994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.402015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.402043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.402061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.402089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.402120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.402147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.402164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.402191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.402208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.402235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.402251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.402278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.402295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.402321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.402338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.402364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.402380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.402408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.519 [2024-07-20 19:01:24.402425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 
sqhd:004a p:0 m:0 dnr:0 00:31:32.519 [2024-07-20 19:01:24.402451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.402468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:24.402494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.402511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:24.402537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.402554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:24.402585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.402602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:24.402629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.402645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:24.402671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.402689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:24.402715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.402731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:24.402758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.402774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:24.402823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.402842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:24.402871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.402889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:24.402917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.402934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:24.402961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.402978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:24.403005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.403022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:24.403049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.403066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:24.403093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.403124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:24.403279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.403300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:24.403334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.403351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:24.403382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.403398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:24.403428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.403445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:24.403476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.403493] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:24.403522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.403539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:24.403569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.403585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:24.403616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.403632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:24.403662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:24.403679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.869493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.869559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.869621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.869648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.869673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.869696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.869719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.869762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.869788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.869814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.869840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:32.520 [2024-07-20 19:01:39.869858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.869881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.869898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.869920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.869938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.869962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.869979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.870001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.870018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.870041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.870060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.870084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.870102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.870126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.870143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.870167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.870185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.870209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.870225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.870248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 
lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.870269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.870293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.870311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.871536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.871562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.871590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.871609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.871633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.871650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.871672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.871689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.871712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.871728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.871751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.871768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.871790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.871814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.871838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.871855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.871877] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.871894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.871916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.871933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.871957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.871973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.872001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.872019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.872042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:71824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.520 [2024-07-20 19:01:39.872059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.872082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.520 [2024-07-20 19:01:39.872099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.872121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.520 [2024-07-20 19:01:39.872153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.872176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.872192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:32.520 [2024-07-20 19:01:39.872214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.520 [2024-07-20 19:01:39.872231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.872253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.872269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 
00:31:32.521 [2024-07-20 19:01:39.872290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.872307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.872328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.872345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.872366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.872383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.872404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.872420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.872442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.872473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.872499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.872516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.872553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.872569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.872591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.872608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.872631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.872647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.872669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.872685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.872707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.872723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.872745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.872762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.872809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.521 [2024-07-20 19:01:39.872829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.872852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.872870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.872893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.872909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.872932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.872949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.874484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.874507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.874533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.521 [2024-07-20 19:01:39.874555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.874579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.521 [2024-07-20 19:01:39.874595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.874616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.521 [2024-07-20 19:01:39.874632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.874653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.874668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.874689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.874705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.874726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.874741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.874762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.874802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.874828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.874845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.874868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.874884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.874907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.521 [2024-07-20 19:01:39.874923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.874945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.521 [2024-07-20 19:01:39.874962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:32.521 [2024-07-20 19:01:39.874984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:32.521 [2024-07-20 19:01:39.875001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:32.521 Received shutdown signal, test time was about 32.210398 seconds 00:31:32.521 00:31:32.521 Latency(us) 00:31:32.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:31:32.521 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:32.521 Verification LBA range: start 0x0 length 0x4000 00:31:32.521 Nvme0n1 : 32.21 8107.68 31.67 0.00 0.00 15762.22 376.23 4026531.84 00:31:32.521 =================================================================================================================== 00:31:32.521 Total : 8107.68 31.67 0.00 0.00 15762.22 376.23 4026531.84 00:31:32.521 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:32.778 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:31:32.778 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:32.778 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:31:32.778 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:32.778 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:31:32.778 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:32.778 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:31:32.778 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:32.778 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:32.778 rmmod nvme_tcp 00:31:32.778 rmmod nvme_fabrics 00:31:32.778 rmmod nvme_keyring 00:31:32.778 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:32.778 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:31:32.778 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:31:32.778 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1510495 ']' 00:31:32.778 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1510495 00:31:32.778 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 1510495 ']' 00:31:32.778 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 1510495 00:31:32.778 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:32.778 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:32.778 19:01:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1510495 00:31:32.778 19:01:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:32.778 19:01:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:32.778 19:01:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1510495' 00:31:32.778 killing process with pid 1510495 00:31:32.778 19:01:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 1510495 00:31:32.778 19:01:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 1510495 00:31:33.035 19:01:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # 
'[' '' == iso ']' 00:31:33.035 19:01:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:33.035 19:01:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:33.035 19:01:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:33.035 19:01:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:33.035 19:01:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.035 19:01:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:33.035 19:01:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.564 19:01:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:35.564 00:31:35.564 real 0m41.039s 00:31:35.564 user 2m2.664s 00:31:35.564 sys 0m10.908s 00:31:35.564 19:01:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:35.564 19:01:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:35.564 ************************************ 00:31:35.564 END TEST nvmf_host_multipath_status 00:31:35.564 ************************************ 00:31:35.564 19:01:45 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:35.564 19:01:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:35.564 19:01:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:35.564 19:01:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:35.564 ************************************ 00:31:35.564 START TEST nvmf_discovery_remove_ifc 00:31:35.564 ************************************ 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:35.564 * Looking for test storage... 
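Editor's note: the multipath_status run above finishes with the usual teardown -- the test subsystem is deleted over the target's RPC socket, nvmftestfini unloads the host kernel modules, and the nvmf_tgt process (pid 1510495 here) is killed. A minimal sketch of that sequence, condensed from the xtrace output above; the $rpcpy and $nvmfpid variable names are assumptions for readability, the commands themselves are the ones the harness ran:

    # Teardown sketch (assumption: $rpcpy points at scripts/rpc.py, $nvmfpid is the nvmf_tgt pid)
    rpcpy=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpcpy nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
    sync                                                      # flush before unloading the host stack
    modprobe -v -r nvme-tcp                                   # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                        # works because nvmf_tgt was started from this shell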
00:31:35.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:31:35.564 19:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:37.506 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:37.506 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:37.506 19:01:47 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:37.506 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:37.506 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:37.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:37.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:31:37.506 00:31:37.506 --- 10.0.0.2 ping statistics --- 00:31:37.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.506 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:37.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:37.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:31:37.506 00:31:37.506 --- 10.0.0.1 ping statistics --- 00:31:37.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.506 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1516963 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1516963 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 1516963 ']' 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:37.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:37.506 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:37.506 [2024-07-20 19:01:47.599831] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
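Editor's note: for discovery_remove_ifc the harness wires the two e810 ports into a point-to-point NVMe/TCP setup -- the target port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2, the initiator port (cvl_0_1) keeps 10.0.0.1 in the root namespace, both directions are ping-checked, and nvmf_tgt is then launched inside the namespace. A condensed sketch of the same steps, using exactly the interface, namespace, and binary paths shown in the log:

    # Namespace plumbing for the NVMe/TCP target (names taken from the log above)
    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                  # sanity check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Target app started inside the namespace (core mask 0x2, all trace groups enabled)
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                                          # the log shows this pid as 1516963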
00:31:37.507 [2024-07-20 19:01:47.599905] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:37.507 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.507 [2024-07-20 19:01:47.668104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.507 [2024-07-20 19:01:47.763389] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:37.507 [2024-07-20 19:01:47.763451] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:37.507 [2024-07-20 19:01:47.763468] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:37.507 [2024-07-20 19:01:47.763482] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:37.507 [2024-07-20 19:01:47.763495] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:37.507 [2024-07-20 19:01:47.763526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:37.765 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:37.765 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:31:37.765 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:37.765 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:37.765 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:37.765 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:37.765 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:37.765 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.765 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:37.765 [2024-07-20 19:01:47.922267] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:37.765 [2024-07-20 19:01:47.930464] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:37.765 null0 00:31:37.765 [2024-07-20 19:01:47.962407] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:37.765 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.765 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1516985 00:31:37.765 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1516985 /tmp/host.sock 00:31:37.765 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 1516985 ']' 00:31:37.765 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:37.765 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:37.765 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:37.765 
19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:37.765 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:37.765 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:37.765 19:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:37.765 [2024-07-20 19:01:48.029969] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:37.765 [2024-07-20 19:01:48.030062] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1516985 ] 00:31:37.765 EAL: No free 2048 kB hugepages reported on node 1 00:31:38.023 [2024-07-20 19:01:48.092151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.023 [2024-07-20 19:01:48.182312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.023 19:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:38.023 19:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:31:38.023 19:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:38.023 19:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:38.023 19:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.023 19:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:38.023 19:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.023 19:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:38.023 19:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.023 19:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:38.023 19:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.023 19:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:38.023 19:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.023 19:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:39.391 [2024-07-20 19:01:49.393033] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:39.391 [2024-07-20 19:01:49.393073] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:39.391 [2024-07-20 19:01:49.393110] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:39.391 [2024-07-20 19:01:49.480389] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:39.391 [2024-07-20 19:01:49.662480] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:39.391 [2024-07-20 19:01:49.662555] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:39.391 [2024-07-20 19:01:49.662595] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:39.391 [2024-07-20 19:01:49.662624] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:39.391 [2024-07-20 19:01:49.662664] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:39.391 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.391 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:39.391 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:39.391 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:39.391 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:39.391 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.391 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:39.391 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:39.391 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:39.391 [2024-07-20 19:01:49.669379] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1ddbdf0 was disconnected and freed. delete nvme_qpair. 
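Editor's note: the host side of the test is a second SPDK app driven through /tmp/host.sock (started above as nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme). bdev_nvme options are set before framework init, then bdev_nvme_start_discovery attaches to the discovery service on 10.0.0.2:8009 with deliberately short reconnect/loss timers so that pulling the interface later makes the controller go away quickly. A sketch of those RPC calls as they appear in the log, again using an assumed $rpcpy variable:

    rpcpy=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpcpy -s /tmp/host.sock bdev_nvme_set_options -e 1        # must run before framework_start_init
    $rpcpy -s /tmp/host.sock framework_start_init
    $rpcpy -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
        --wait-for-attach                                      # blocks until the discovered controller attaches
    $rpcpy -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' # expected result here: nvme0n1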
00:31:39.391 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.391 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:39.391 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:39.391 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:39.647 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:39.647 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:39.647 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:39.647 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:39.648 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.648 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:39.648 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:39.648 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:39.648 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.648 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:39.648 19:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:40.579 19:01:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:40.579 19:01:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:40.579 19:01:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:40.579 19:01:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.579 19:01:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:40.579 19:01:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:40.579 19:01:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:40.579 19:01:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.579 19:01:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:40.579 19:01:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:41.948 19:01:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:41.948 19:01:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:41.948 19:01:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:41.948 19:01:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.948 19:01:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:41.948 19:01:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:31:41.948 19:01:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:41.948 19:01:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.948 19:01:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:41.948 19:01:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:42.880 19:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:42.880 19:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:42.880 19:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:42.880 19:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.880 19:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:42.880 19:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:42.880 19:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:42.880 19:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.880 19:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:42.880 19:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:43.811 19:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:43.811 19:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:43.811 19:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.811 19:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:43.811 19:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:43.811 19:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:43.811 19:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:43.811 19:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.811 19:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:43.811 19:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:44.740 19:01:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:44.740 19:01:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:44.740 19:01:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:44.740 19:01:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.740 19:01:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:44.740 19:01:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:44.740 19:01:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:44.740 19:01:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
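Editor's note: after the target-side address is deleted and cvl_0_0 is downed, wait_for_bdev '' simply polls the host's bdev list once per second until nvme0n1 disappears; the repeated get_bdev_list/sleep blocks above are that loop unrolled in the xtrace output. A simplified reconstruction is below -- the real helper in discovery_remove_ifc.sh may differ in detail (for instance by bounding the number of retries), and $rpcpy is the assumed variable from the earlier sketch:

    # Poll until the host's bdev list matches the expected value (here: empty, i.e. nvme0n1 gone)
    wait_for_bdev() {
        local expected=$1
        while true; do
            bdevs=$($rpcpy -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
            [[ $bdevs == "$expected" ]] && break
            sleep 1
        done
    }
    wait_for_bdev ''    # the 2s ctrlr-loss timeout should remove nvme0n1 once reconnects keep failing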
00:31:44.740 19:01:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:44.740 19:01:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:44.996 [2024-07-20 19:01:55.103627] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:44.996 [2024-07-20 19:01:55.103703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:44.996 [2024-07-20 19:01:55.103727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:44.996 [2024-07-20 19:01:55.103753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:44.996 [2024-07-20 19:01:55.103769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:44.996 [2024-07-20 19:01:55.103785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:44.996 [2024-07-20 19:01:55.103811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:44.996 [2024-07-20 19:01:55.103828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:44.996 [2024-07-20 19:01:55.103865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:44.996 [2024-07-20 19:01:55.103880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:44.996 [2024-07-20 19:01:55.103894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:44.996 [2024-07-20 19:01:55.103908] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2f80 is same with the state(5) to be set 00:31:44.996 [2024-07-20 19:01:55.113645] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da2f80 (9): Bad file descriptor 00:31:44.996 [2024-07-20 19:01:55.123688] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:45.939 19:01:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:45.939 19:01:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:45.939 19:01:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:45.939 19:01:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.939 19:01:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:45.939 19:01:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:45.939 19:01:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:45.939 [2024-07-20 19:01:56.137838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:45.939 [2024-07-20 
19:01:56.137917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da2f80 with addr=10.0.0.2, port=4420 00:31:45.939 [2024-07-20 19:01:56.137956] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2f80 is same with the state(5) to be set 00:31:45.939 [2024-07-20 19:01:56.138018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da2f80 (9): Bad file descriptor 00:31:45.939 [2024-07-20 19:01:56.138494] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.939 [2024-07-20 19:01:56.138530] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:45.939 [2024-07-20 19:01:56.138548] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:45.939 [2024-07-20 19:01:56.138567] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:45.939 [2024-07-20 19:01:56.138604] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.939 [2024-07-20 19:01:56.138623] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:45.939 19:01:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.939 19:01:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:45.939 19:01:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:46.873 [2024-07-20 19:01:57.141120] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:46.873 [2024-07-20 19:01:57.141162] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:46.873 [2024-07-20 19:01:57.141179] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:46.873 [2024-07-20 19:01:57.141194] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:31:46.873 [2024-07-20 19:01:57.141217] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.873 [2024-07-20 19:01:57.141253] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:46.873 [2024-07-20 19:01:57.141301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.873 [2024-07-20 19:01:57.141325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-07-20 19:01:57.141347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.873 [2024-07-20 19:01:57.141363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-07-20 19:01:57.141379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.873 [2024-07-20 19:01:57.141396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-07-20 19:01:57.141413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.873 [2024-07-20 19:01:57.141430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-07-20 19:01:57.141446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.873 [2024-07-20 19:01:57.141462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-07-20 19:01:57.141477] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:31:46.873 [2024-07-20 19:01:57.141628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da2410 (9): Bad file descriptor 00:31:46.873 [2024-07-20 19:01:57.142647] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:46.873 [2024-07-20 19:01:57.142674] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:46.873 19:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:46.873 19:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:46.873 19:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:46.873 19:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.873 19:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:46.873 19:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:46.873 19:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:46.873 19:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.873 19:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:46.873 19:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:47.130 19:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:47.130 19:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:47.130 19:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:47.130 19:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:47.130 19:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:47.130 19:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.130 19:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:47.130 19:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:47.130 19:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:47.130 19:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.130 19:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:47.130 19:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:48.062 19:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:48.062 19:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:48.062 19:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:48.062 19:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.062 19:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:31:48.062 19:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:48.062 19:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:48.062 19:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.062 19:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:48.062 19:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:48.993 [2024-07-20 19:01:59.239086] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:48.993 [2024-07-20 19:01:59.239111] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:48.993 [2024-07-20 19:01:59.239151] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:49.251 19:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:49.251 19:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:49.251 19:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:49.251 19:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.251 19:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:49.251 19:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:49.251 19:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:49.251 19:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.251 [2024-07-20 19:01:59.365593] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:49.251 19:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:49.251 19:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:49.251 [2024-07-20 19:01:59.548133] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:49.251 [2024-07-20 19:01:59.548185] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:49.251 [2024-07-20 19:01:59.548217] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:49.251 [2024-07-20 19:01:59.548240] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:49.251 [2024-07-20 19:01:59.548253] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:49.251 [2024-07-20 19:01:59.555779] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1dafd30 was disconnected and freed. delete nvme_qpair. 
00:31:50.185 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:50.185 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:50.185 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:50.185 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.185 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:50.185 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:50.185 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:50.185 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.185 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:50.186 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:50.186 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1516985 00:31:50.186 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 1516985 ']' 00:31:50.186 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 1516985 00:31:50.186 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:31:50.186 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:50.186 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1516985 00:31:50.186 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:50.186 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:50.186 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1516985' 00:31:50.186 killing process with pid 1516985 00:31:50.186 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 1516985 00:31:50.186 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 1516985 00:31:50.443 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:50.443 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:50.443 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:31:50.443 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:50.443 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:31:50.443 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:50.443 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:50.443 rmmod nvme_tcp 00:31:50.443 rmmod nvme_fabrics 00:31:50.444 rmmod nvme_keyring 00:31:50.444 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:50.444 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:31:50.444 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
00:31:50.444 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1516963 ']' 00:31:50.444 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1516963 00:31:50.444 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 1516963 ']' 00:31:50.444 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 1516963 00:31:50.444 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:31:50.444 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:50.444 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1516963 00:31:50.444 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:50.444 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:50.444 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1516963' 00:31:50.444 killing process with pid 1516963 00:31:50.444 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 1516963 00:31:50.444 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 1516963 00:31:50.702 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:50.702 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:50.702 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:50.702 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:50.702 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:50.702 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.702 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:50.702 19:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.232 19:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:53.232 00:31:53.232 real 0m17.668s 00:31:53.232 user 0m25.595s 00:31:53.232 sys 0m3.057s 00:31:53.232 19:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:53.232 19:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:53.232 ************************************ 00:31:53.232 END TEST nvmf_discovery_remove_ifc 00:31:53.232 ************************************ 00:31:53.232 19:02:03 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:53.232 19:02:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:53.232 19:02:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:53.232 19:02:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:53.232 ************************************ 00:31:53.232 START TEST nvmf_identify_kernel_target 00:31:53.232 ************************************ 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:53.232 * Looking for test storage... 00:31:53.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:31:53.232 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.233 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:53.233 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.233 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:53.233 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:53.233 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:31:53.233 19:02:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:55.131 19:02:04 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:55.131 19:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:55.131 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:55.131 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:55.131 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:55.132 
19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:55.132 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:55.132 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:55.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:55.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:31:55.132 00:31:55.132 --- 10.0.0.2 ping statistics --- 00:31:55.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.132 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:55.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:55.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:31:55.132 00:31:55.132 --- 10.0.0.1 ping statistics --- 00:31:55.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.132 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.132 
19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:55.132 19:02:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:56.065 Waiting for block devices as requested 00:31:56.065 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:56.322 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:56.322 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:56.322 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:56.322 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:56.580 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:56.580 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:56.580 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:56.580 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:56.580 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:56.837 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:56.837 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:56.838 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:57.095 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:57.095 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:57.095 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:57.095 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:57.353 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:57.353 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:57.353 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:57.353 19:02:07 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:31:57.353 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:57.353 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:31:57.354 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:57.354 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:57.354 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:57.354 No valid GPT data, bailing 00:31:57.354 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:57.354 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:31:57.354 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:31:57.354 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:57.354 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:57.354 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:57.354 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:57.354 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:57.354 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:57.354 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:31:57.354 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:57.354 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:31:57.354 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:57.354 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:31:57.354 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:31:57.354 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:31:57.354 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:57.354 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:31:57.354 00:31:57.354 Discovery Log Number of Records 2, Generation counter 2 00:31:57.354 =====Discovery Log Entry 0====== 00:31:57.354 trtype: tcp 00:31:57.354 adrfam: ipv4 00:31:57.354 subtype: current discovery subsystem 00:31:57.354 treq: not specified, sq flow control disable supported 00:31:57.354 portid: 1 00:31:57.354 trsvcid: 4420 00:31:57.354 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:57.354 traddr: 10.0.0.1 00:31:57.354 eflags: none 00:31:57.354 sectype: none 00:31:57.354 =====Discovery Log Entry 1====== 
00:31:57.354 trtype: tcp 00:31:57.354 adrfam: ipv4 00:31:57.354 subtype: nvme subsystem 00:31:57.354 treq: not specified, sq flow control disable supported 00:31:57.354 portid: 1 00:31:57.354 trsvcid: 4420 00:31:57.354 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:57.354 traddr: 10.0.0.1 00:31:57.354 eflags: none 00:31:57.354 sectype: none 00:31:57.354 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:57.354 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:57.354 EAL: No free 2048 kB hugepages reported on node 1 00:31:57.354 ===================================================== 00:31:57.354 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:57.354 ===================================================== 00:31:57.354 Controller Capabilities/Features 00:31:57.354 ================================ 00:31:57.354 Vendor ID: 0000 00:31:57.354 Subsystem Vendor ID: 0000 00:31:57.354 Serial Number: b961a01676296f38f4f8 00:31:57.354 Model Number: Linux 00:31:57.354 Firmware Version: 6.7.0-68 00:31:57.354 Recommended Arb Burst: 0 00:31:57.354 IEEE OUI Identifier: 00 00 00 00:31:57.354 Multi-path I/O 00:31:57.354 May have multiple subsystem ports: No 00:31:57.354 May have multiple controllers: No 00:31:57.354 Associated with SR-IOV VF: No 00:31:57.354 Max Data Transfer Size: Unlimited 00:31:57.354 Max Number of Namespaces: 0 00:31:57.354 Max Number of I/O Queues: 1024 00:31:57.354 NVMe Specification Version (VS): 1.3 00:31:57.354 NVMe Specification Version (Identify): 1.3 00:31:57.354 Maximum Queue Entries: 1024 00:31:57.354 Contiguous Queues Required: No 00:31:57.354 Arbitration Mechanisms Supported 00:31:57.354 Weighted Round Robin: Not Supported 00:31:57.354 Vendor Specific: Not Supported 00:31:57.354 Reset Timeout: 7500 ms 00:31:57.354 Doorbell Stride: 4 bytes 00:31:57.354 NVM Subsystem Reset: Not Supported 00:31:57.354 Command Sets Supported 00:31:57.354 NVM Command Set: Supported 00:31:57.354 Boot Partition: Not Supported 00:31:57.354 Memory Page Size Minimum: 4096 bytes 00:31:57.354 Memory Page Size Maximum: 4096 bytes 00:31:57.354 Persistent Memory Region: Not Supported 00:31:57.354 Optional Asynchronous Events Supported 00:31:57.354 Namespace Attribute Notices: Not Supported 00:31:57.354 Firmware Activation Notices: Not Supported 00:31:57.354 ANA Change Notices: Not Supported 00:31:57.354 PLE Aggregate Log Change Notices: Not Supported 00:31:57.354 LBA Status Info Alert Notices: Not Supported 00:31:57.354 EGE Aggregate Log Change Notices: Not Supported 00:31:57.354 Normal NVM Subsystem Shutdown event: Not Supported 00:31:57.354 Zone Descriptor Change Notices: Not Supported 00:31:57.354 Discovery Log Change Notices: Supported 00:31:57.354 Controller Attributes 00:31:57.354 128-bit Host Identifier: Not Supported 00:31:57.354 Non-Operational Permissive Mode: Not Supported 00:31:57.354 NVM Sets: Not Supported 00:31:57.354 Read Recovery Levels: Not Supported 00:31:57.354 Endurance Groups: Not Supported 00:31:57.354 Predictable Latency Mode: Not Supported 00:31:57.354 Traffic Based Keep ALive: Not Supported 00:31:57.354 Namespace Granularity: Not Supported 00:31:57.354 SQ Associations: Not Supported 00:31:57.354 UUID List: Not Supported 00:31:57.354 Multi-Domain Subsystem: Not Supported 00:31:57.354 Fixed Capacity Management: Not Supported 00:31:57.354 Variable Capacity Management: Not 
Supported 00:31:57.354 Delete Endurance Group: Not Supported 00:31:57.354 Delete NVM Set: Not Supported 00:31:57.354 Extended LBA Formats Supported: Not Supported 00:31:57.354 Flexible Data Placement Supported: Not Supported 00:31:57.354 00:31:57.354 Controller Memory Buffer Support 00:31:57.354 ================================ 00:31:57.354 Supported: No 00:31:57.354 00:31:57.354 Persistent Memory Region Support 00:31:57.354 ================================ 00:31:57.354 Supported: No 00:31:57.354 00:31:57.354 Admin Command Set Attributes 00:31:57.354 ============================ 00:31:57.354 Security Send/Receive: Not Supported 00:31:57.354 Format NVM: Not Supported 00:31:57.354 Firmware Activate/Download: Not Supported 00:31:57.354 Namespace Management: Not Supported 00:31:57.354 Device Self-Test: Not Supported 00:31:57.354 Directives: Not Supported 00:31:57.354 NVMe-MI: Not Supported 00:31:57.354 Virtualization Management: Not Supported 00:31:57.354 Doorbell Buffer Config: Not Supported 00:31:57.354 Get LBA Status Capability: Not Supported 00:31:57.354 Command & Feature Lockdown Capability: Not Supported 00:31:57.354 Abort Command Limit: 1 00:31:57.354 Async Event Request Limit: 1 00:31:57.354 Number of Firmware Slots: N/A 00:31:57.354 Firmware Slot 1 Read-Only: N/A 00:31:57.354 Firmware Activation Without Reset: N/A 00:31:57.354 Multiple Update Detection Support: N/A 00:31:57.354 Firmware Update Granularity: No Information Provided 00:31:57.354 Per-Namespace SMART Log: No 00:31:57.354 Asymmetric Namespace Access Log Page: Not Supported 00:31:57.354 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:57.354 Command Effects Log Page: Not Supported 00:31:57.354 Get Log Page Extended Data: Supported 00:31:57.354 Telemetry Log Pages: Not Supported 00:31:57.354 Persistent Event Log Pages: Not Supported 00:31:57.354 Supported Log Pages Log Page: May Support 00:31:57.354 Commands Supported & Effects Log Page: Not Supported 00:31:57.354 Feature Identifiers & Effects Log Page:May Support 00:31:57.354 NVMe-MI Commands & Effects Log Page: May Support 00:31:57.354 Data Area 4 for Telemetry Log: Not Supported 00:31:57.354 Error Log Page Entries Supported: 1 00:31:57.354 Keep Alive: Not Supported 00:31:57.354 00:31:57.354 NVM Command Set Attributes 00:31:57.354 ========================== 00:31:57.354 Submission Queue Entry Size 00:31:57.354 Max: 1 00:31:57.354 Min: 1 00:31:57.354 Completion Queue Entry Size 00:31:57.354 Max: 1 00:31:57.354 Min: 1 00:31:57.354 Number of Namespaces: 0 00:31:57.354 Compare Command: Not Supported 00:31:57.354 Write Uncorrectable Command: Not Supported 00:31:57.354 Dataset Management Command: Not Supported 00:31:57.354 Write Zeroes Command: Not Supported 00:31:57.354 Set Features Save Field: Not Supported 00:31:57.354 Reservations: Not Supported 00:31:57.354 Timestamp: Not Supported 00:31:57.354 Copy: Not Supported 00:31:57.354 Volatile Write Cache: Not Present 00:31:57.354 Atomic Write Unit (Normal): 1 00:31:57.354 Atomic Write Unit (PFail): 1 00:31:57.354 Atomic Compare & Write Unit: 1 00:31:57.354 Fused Compare & Write: Not Supported 00:31:57.355 Scatter-Gather List 00:31:57.355 SGL Command Set: Supported 00:31:57.355 SGL Keyed: Not Supported 00:31:57.355 SGL Bit Bucket Descriptor: Not Supported 00:31:57.355 SGL Metadata Pointer: Not Supported 00:31:57.355 Oversized SGL: Not Supported 00:31:57.355 SGL Metadata Address: Not Supported 00:31:57.355 SGL Offset: Supported 00:31:57.355 Transport SGL Data Block: Not Supported 00:31:57.355 Replay Protected Memory Block: 
Not Supported 00:31:57.355 00:31:57.355 Firmware Slot Information 00:31:57.355 ========================= 00:31:57.355 Active slot: 0 00:31:57.355 00:31:57.355 00:31:57.355 Error Log 00:31:57.355 ========= 00:31:57.355 00:31:57.355 Active Namespaces 00:31:57.355 ================= 00:31:57.355 Discovery Log Page 00:31:57.355 ================== 00:31:57.355 Generation Counter: 2 00:31:57.355 Number of Records: 2 00:31:57.355 Record Format: 0 00:31:57.355 00:31:57.355 Discovery Log Entry 0 00:31:57.355 ---------------------- 00:31:57.355 Transport Type: 3 (TCP) 00:31:57.355 Address Family: 1 (IPv4) 00:31:57.355 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:57.355 Entry Flags: 00:31:57.355 Duplicate Returned Information: 0 00:31:57.355 Explicit Persistent Connection Support for Discovery: 0 00:31:57.355 Transport Requirements: 00:31:57.355 Secure Channel: Not Specified 00:31:57.355 Port ID: 1 (0x0001) 00:31:57.355 Controller ID: 65535 (0xffff) 00:31:57.355 Admin Max SQ Size: 32 00:31:57.355 Transport Service Identifier: 4420 00:31:57.355 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:57.355 Transport Address: 10.0.0.1 00:31:57.355 Discovery Log Entry 1 00:31:57.355 ---------------------- 00:31:57.355 Transport Type: 3 (TCP) 00:31:57.355 Address Family: 1 (IPv4) 00:31:57.355 Subsystem Type: 2 (NVM Subsystem) 00:31:57.355 Entry Flags: 00:31:57.355 Duplicate Returned Information: 0 00:31:57.355 Explicit Persistent Connection Support for Discovery: 0 00:31:57.355 Transport Requirements: 00:31:57.355 Secure Channel: Not Specified 00:31:57.355 Port ID: 1 (0x0001) 00:31:57.355 Controller ID: 65535 (0xffff) 00:31:57.355 Admin Max SQ Size: 32 00:31:57.355 Transport Service Identifier: 4420 00:31:57.355 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:57.355 Transport Address: 10.0.0.1 00:31:57.355 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:57.614 EAL: No free 2048 kB hugepages reported on node 1 00:31:57.614 get_feature(0x01) failed 00:31:57.614 get_feature(0x02) failed 00:31:57.614 get_feature(0x04) failed 00:31:57.614 ===================================================== 00:31:57.614 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:57.614 ===================================================== 00:31:57.614 Controller Capabilities/Features 00:31:57.614 ================================ 00:31:57.614 Vendor ID: 0000 00:31:57.614 Subsystem Vendor ID: 0000 00:31:57.614 Serial Number: 7ec72885b0cd3e937975 00:31:57.614 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:57.614 Firmware Version: 6.7.0-68 00:31:57.614 Recommended Arb Burst: 6 00:31:57.614 IEEE OUI Identifier: 00 00 00 00:31:57.614 Multi-path I/O 00:31:57.614 May have multiple subsystem ports: Yes 00:31:57.614 May have multiple controllers: Yes 00:31:57.614 Associated with SR-IOV VF: No 00:31:57.614 Max Data Transfer Size: Unlimited 00:31:57.614 Max Number of Namespaces: 1024 00:31:57.614 Max Number of I/O Queues: 128 00:31:57.614 NVMe Specification Version (VS): 1.3 00:31:57.614 NVMe Specification Version (Identify): 1.3 00:31:57.614 Maximum Queue Entries: 1024 00:31:57.614 Contiguous Queues Required: No 00:31:57.614 Arbitration Mechanisms Supported 00:31:57.614 Weighted Round Robin: Not Supported 00:31:57.614 Vendor Specific: Not Supported 
00:31:57.614 Reset Timeout: 7500 ms 00:31:57.614 Doorbell Stride: 4 bytes 00:31:57.614 NVM Subsystem Reset: Not Supported 00:31:57.614 Command Sets Supported 00:31:57.614 NVM Command Set: Supported 00:31:57.614 Boot Partition: Not Supported 00:31:57.614 Memory Page Size Minimum: 4096 bytes 00:31:57.614 Memory Page Size Maximum: 4096 bytes 00:31:57.614 Persistent Memory Region: Not Supported 00:31:57.614 Optional Asynchronous Events Supported 00:31:57.614 Namespace Attribute Notices: Supported 00:31:57.614 Firmware Activation Notices: Not Supported 00:31:57.614 ANA Change Notices: Supported 00:31:57.614 PLE Aggregate Log Change Notices: Not Supported 00:31:57.614 LBA Status Info Alert Notices: Not Supported 00:31:57.614 EGE Aggregate Log Change Notices: Not Supported 00:31:57.614 Normal NVM Subsystem Shutdown event: Not Supported 00:31:57.614 Zone Descriptor Change Notices: Not Supported 00:31:57.614 Discovery Log Change Notices: Not Supported 00:31:57.614 Controller Attributes 00:31:57.614 128-bit Host Identifier: Supported 00:31:57.614 Non-Operational Permissive Mode: Not Supported 00:31:57.614 NVM Sets: Not Supported 00:31:57.614 Read Recovery Levels: Not Supported 00:31:57.614 Endurance Groups: Not Supported 00:31:57.614 Predictable Latency Mode: Not Supported 00:31:57.614 Traffic Based Keep ALive: Supported 00:31:57.614 Namespace Granularity: Not Supported 00:31:57.614 SQ Associations: Not Supported 00:31:57.614 UUID List: Not Supported 00:31:57.614 Multi-Domain Subsystem: Not Supported 00:31:57.614 Fixed Capacity Management: Not Supported 00:31:57.614 Variable Capacity Management: Not Supported 00:31:57.614 Delete Endurance Group: Not Supported 00:31:57.614 Delete NVM Set: Not Supported 00:31:57.614 Extended LBA Formats Supported: Not Supported 00:31:57.614 Flexible Data Placement Supported: Not Supported 00:31:57.614 00:31:57.614 Controller Memory Buffer Support 00:31:57.614 ================================ 00:31:57.615 Supported: No 00:31:57.615 00:31:57.615 Persistent Memory Region Support 00:31:57.615 ================================ 00:31:57.615 Supported: No 00:31:57.615 00:31:57.615 Admin Command Set Attributes 00:31:57.615 ============================ 00:31:57.615 Security Send/Receive: Not Supported 00:31:57.615 Format NVM: Not Supported 00:31:57.615 Firmware Activate/Download: Not Supported 00:31:57.615 Namespace Management: Not Supported 00:31:57.615 Device Self-Test: Not Supported 00:31:57.615 Directives: Not Supported 00:31:57.615 NVMe-MI: Not Supported 00:31:57.615 Virtualization Management: Not Supported 00:31:57.615 Doorbell Buffer Config: Not Supported 00:31:57.615 Get LBA Status Capability: Not Supported 00:31:57.615 Command & Feature Lockdown Capability: Not Supported 00:31:57.615 Abort Command Limit: 4 00:31:57.615 Async Event Request Limit: 4 00:31:57.615 Number of Firmware Slots: N/A 00:31:57.615 Firmware Slot 1 Read-Only: N/A 00:31:57.615 Firmware Activation Without Reset: N/A 00:31:57.615 Multiple Update Detection Support: N/A 00:31:57.615 Firmware Update Granularity: No Information Provided 00:31:57.615 Per-Namespace SMART Log: Yes 00:31:57.615 Asymmetric Namespace Access Log Page: Supported 00:31:57.615 ANA Transition Time : 10 sec 00:31:57.615 00:31:57.615 Asymmetric Namespace Access Capabilities 00:31:57.615 ANA Optimized State : Supported 00:31:57.615 ANA Non-Optimized State : Supported 00:31:57.615 ANA Inaccessible State : Supported 00:31:57.615 ANA Persistent Loss State : Supported 00:31:57.615 ANA Change State : Supported 00:31:57.615 ANAGRPID is not 
changed : No 00:31:57.615 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:57.615 00:31:57.615 ANA Group Identifier Maximum : 128 00:31:57.615 Number of ANA Group Identifiers : 128 00:31:57.615 Max Number of Allowed Namespaces : 1024 00:31:57.615 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:57.615 Command Effects Log Page: Supported 00:31:57.615 Get Log Page Extended Data: Supported 00:31:57.615 Telemetry Log Pages: Not Supported 00:31:57.615 Persistent Event Log Pages: Not Supported 00:31:57.615 Supported Log Pages Log Page: May Support 00:31:57.615 Commands Supported & Effects Log Page: Not Supported 00:31:57.615 Feature Identifiers & Effects Log Page:May Support 00:31:57.615 NVMe-MI Commands & Effects Log Page: May Support 00:31:57.615 Data Area 4 for Telemetry Log: Not Supported 00:31:57.615 Error Log Page Entries Supported: 128 00:31:57.615 Keep Alive: Supported 00:31:57.615 Keep Alive Granularity: 1000 ms 00:31:57.615 00:31:57.615 NVM Command Set Attributes 00:31:57.615 ========================== 00:31:57.615 Submission Queue Entry Size 00:31:57.615 Max: 64 00:31:57.615 Min: 64 00:31:57.615 Completion Queue Entry Size 00:31:57.615 Max: 16 00:31:57.615 Min: 16 00:31:57.615 Number of Namespaces: 1024 00:31:57.615 Compare Command: Not Supported 00:31:57.615 Write Uncorrectable Command: Not Supported 00:31:57.615 Dataset Management Command: Supported 00:31:57.615 Write Zeroes Command: Supported 00:31:57.615 Set Features Save Field: Not Supported 00:31:57.615 Reservations: Not Supported 00:31:57.615 Timestamp: Not Supported 00:31:57.615 Copy: Not Supported 00:31:57.615 Volatile Write Cache: Present 00:31:57.615 Atomic Write Unit (Normal): 1 00:31:57.615 Atomic Write Unit (PFail): 1 00:31:57.615 Atomic Compare & Write Unit: 1 00:31:57.615 Fused Compare & Write: Not Supported 00:31:57.615 Scatter-Gather List 00:31:57.615 SGL Command Set: Supported 00:31:57.615 SGL Keyed: Not Supported 00:31:57.615 SGL Bit Bucket Descriptor: Not Supported 00:31:57.615 SGL Metadata Pointer: Not Supported 00:31:57.615 Oversized SGL: Not Supported 00:31:57.615 SGL Metadata Address: Not Supported 00:31:57.615 SGL Offset: Supported 00:31:57.615 Transport SGL Data Block: Not Supported 00:31:57.615 Replay Protected Memory Block: Not Supported 00:31:57.615 00:31:57.615 Firmware Slot Information 00:31:57.615 ========================= 00:31:57.615 Active slot: 0 00:31:57.615 00:31:57.615 Asymmetric Namespace Access 00:31:57.615 =========================== 00:31:57.615 Change Count : 0 00:31:57.615 Number of ANA Group Descriptors : 1 00:31:57.615 ANA Group Descriptor : 0 00:31:57.615 ANA Group ID : 1 00:31:57.615 Number of NSID Values : 1 00:31:57.615 Change Count : 0 00:31:57.615 ANA State : 1 00:31:57.615 Namespace Identifier : 1 00:31:57.615 00:31:57.615 Commands Supported and Effects 00:31:57.615 ============================== 00:31:57.615 Admin Commands 00:31:57.615 -------------- 00:31:57.615 Get Log Page (02h): Supported 00:31:57.615 Identify (06h): Supported 00:31:57.615 Abort (08h): Supported 00:31:57.615 Set Features (09h): Supported 00:31:57.615 Get Features (0Ah): Supported 00:31:57.615 Asynchronous Event Request (0Ch): Supported 00:31:57.615 Keep Alive (18h): Supported 00:31:57.615 I/O Commands 00:31:57.615 ------------ 00:31:57.615 Flush (00h): Supported 00:31:57.615 Write (01h): Supported LBA-Change 00:31:57.615 Read (02h): Supported 00:31:57.615 Write Zeroes (08h): Supported LBA-Change 00:31:57.615 Dataset Management (09h): Supported 00:31:57.615 00:31:57.615 Error Log 00:31:57.615 ========= 
00:31:57.615 Entry: 0 00:31:57.615 Error Count: 0x3 00:31:57.615 Submission Queue Id: 0x0 00:31:57.615 Command Id: 0x5 00:31:57.615 Phase Bit: 0 00:31:57.615 Status Code: 0x2 00:31:57.615 Status Code Type: 0x0 00:31:57.615 Do Not Retry: 1 00:31:57.615 Error Location: 0x28 00:31:57.615 LBA: 0x0 00:31:57.615 Namespace: 0x0 00:31:57.615 Vendor Log Page: 0x0 00:31:57.615 ----------- 00:31:57.615 Entry: 1 00:31:57.615 Error Count: 0x2 00:31:57.615 Submission Queue Id: 0x0 00:31:57.615 Command Id: 0x5 00:31:57.615 Phase Bit: 0 00:31:57.615 Status Code: 0x2 00:31:57.615 Status Code Type: 0x0 00:31:57.615 Do Not Retry: 1 00:31:57.615 Error Location: 0x28 00:31:57.615 LBA: 0x0 00:31:57.615 Namespace: 0x0 00:31:57.615 Vendor Log Page: 0x0 00:31:57.615 ----------- 00:31:57.615 Entry: 2 00:31:57.615 Error Count: 0x1 00:31:57.615 Submission Queue Id: 0x0 00:31:57.615 Command Id: 0x4 00:31:57.615 Phase Bit: 0 00:31:57.615 Status Code: 0x2 00:31:57.615 Status Code Type: 0x0 00:31:57.615 Do Not Retry: 1 00:31:57.615 Error Location: 0x28 00:31:57.615 LBA: 0x0 00:31:57.615 Namespace: 0x0 00:31:57.615 Vendor Log Page: 0x0 00:31:57.615 00:31:57.615 Number of Queues 00:31:57.615 ================ 00:31:57.615 Number of I/O Submission Queues: 128 00:31:57.615 Number of I/O Completion Queues: 128 00:31:57.615 00:31:57.615 ZNS Specific Controller Data 00:31:57.615 ============================ 00:31:57.615 Zone Append Size Limit: 0 00:31:57.615 00:31:57.615 00:31:57.615 Active Namespaces 00:31:57.615 ================= 00:31:57.615 get_feature(0x05) failed 00:31:57.615 Namespace ID:1 00:31:57.615 Command Set Identifier: NVM (00h) 00:31:57.615 Deallocate: Supported 00:31:57.615 Deallocated/Unwritten Error: Not Supported 00:31:57.615 Deallocated Read Value: Unknown 00:31:57.615 Deallocate in Write Zeroes: Not Supported 00:31:57.615 Deallocated Guard Field: 0xFFFF 00:31:57.615 Flush: Supported 00:31:57.615 Reservation: Not Supported 00:31:57.615 Namespace Sharing Capabilities: Multiple Controllers 00:31:57.615 Size (in LBAs): 1953525168 (931GiB) 00:31:57.615 Capacity (in LBAs): 1953525168 (931GiB) 00:31:57.615 Utilization (in LBAs): 1953525168 (931GiB) 00:31:57.615 UUID: 0702f18a-97a6-4e7b-941b-43124bd5f058 00:31:57.615 Thin Provisioning: Not Supported 00:31:57.615 Per-NS Atomic Units: Yes 00:31:57.615 Atomic Boundary Size (Normal): 0 00:31:57.615 Atomic Boundary Size (PFail): 0 00:31:57.615 Atomic Boundary Offset: 0 00:31:57.615 NGUID/EUI64 Never Reused: No 00:31:57.615 ANA group ID: 1 00:31:57.615 Namespace Write Protected: No 00:31:57.615 Number of LBA Formats: 1 00:31:57.615 Current LBA Format: LBA Format #00 00:31:57.615 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:57.615 00:31:57.615 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:57.615 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:57.615 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:31:57.615 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:57.615 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:31:57.615 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:57.615 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:57.615 rmmod nvme_tcp 00:31:57.615 rmmod nvme_fabrics 00:31:57.615 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:57.616 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:31:57.616 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:31:57.616 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:57.616 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:57.616 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:57.616 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:57.616 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:57.616 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:57.616 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.616 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:57.616 19:02:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.589 19:02:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:59.589 19:02:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:59.589 19:02:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:59.589 19:02:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:31:59.589 19:02:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:59.589 19:02:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:59.589 19:02:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:59.589 19:02:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:59.589 19:02:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:59.589 19:02:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:59.589 19:02:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:00.523 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:00.523 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:00.523 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:00.523 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:00.523 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:00.523 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:00.523 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:00.523 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:00.523 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:00.523 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:00.523 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:00.784 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:00.784 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:00.784 0000:80:04.2 (8086 0e22): ioatdma 
-> vfio-pci 00:32:00.784 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:00.784 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:01.719 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:01.719 00:32:01.719 real 0m8.883s 00:32:01.719 user 0m1.857s 00:32:01.719 sys 0m3.136s 00:32:01.719 19:02:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:01.719 19:02:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:01.719 ************************************ 00:32:01.719 END TEST nvmf_identify_kernel_target 00:32:01.719 ************************************ 00:32:01.719 19:02:11 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:01.719 19:02:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:01.719 19:02:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:01.719 19:02:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:01.719 ************************************ 00:32:01.719 START TEST nvmf_auth_host 00:32:01.719 ************************************ 00:32:01.719 19:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:01.977 * Looking for test storage... 00:32:01.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:01.977 19:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:01.977 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:01.977 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:01.977 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:01.977 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:01.977 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
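Before the nvmf_auth_host run gets going, note how the previous test tore its kernel target down: clean_kernel_target walks the nvmet configfs tree in reverse order of creation and then unloads the modules. A minimal standalone sketch of that same sequence follows; the paths mirror the logged commands, but the redirect target of the bare "echo 0" (the namespace enable attribute) is an assumption, since the log only records the echo itself.

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  echo 0 > "$subsys/namespaces/1/enable"                  # assumed target of the bare "echo 0"
  rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"    # unlink the subsystem from the port
  rmdir "$subsys/namespaces/1"                            # then remove namespace, port, subsystem
  rmdir "$port"
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet                             # finally unload the kernel target modules

The order matters: configfs refuses to rmdir a subsystem that is still linked under a port or still holds a namespace, which is why the rm -f and the namespace rmdir come first.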
00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:01.978 19:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@298 -- # local -ga mlx 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:04.503 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:04.503 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 
]] 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:04.503 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:04.503 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:04.503 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:04.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:04.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:32:04.504 00:32:04.504 --- 10.0.0.2 ping statistics --- 00:32:04.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.504 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:04.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:04.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:32:04.504 00:32:04.504 --- 10.0.0.1 ping statistics --- 00:32:04.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.504 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1524054 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # 
waitforlisten 1524054 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 1524054 ']' 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6e72412e2ed09c6a1f2a00e5ffe1960c 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.pD2 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6e72412e2ed09c6a1f2a00e5ffe1960c 0 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6e72412e2ed09c6a1f2a00e5ffe1960c 0 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6e72412e2ed09c6a1f2a00e5ffe1960c 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.pD2 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.pD2 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.pD2 
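Stepping back to the nvmf_tcp_init sequence logged just before nvmfappstart: with two E810 ports (cvl_0_0, cvl_0_1) on one host, the harness isolates the target side in a network namespace so initiator and target each get their own 10.0.0.x address. Condensed from the logged commands, with the interface and namespace names of this run:

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                              # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator IP in the default namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0      # target IP inside the namespace
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # open the NVMe/TCP port
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1 # sanity-check both directions, as above

The target application is then launched with ip netns exec "$NS", which is why the nvmf_tgt command line in this log is prefixed with the namespace command.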
00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=62e38a464bae248209c3ab94f2524715d14d8cdb31892394d93ad2914c950d17 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.e43 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 62e38a464bae248209c3ab94f2524715d14d8cdb31892394d93ad2914c950d17 3 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 62e38a464bae248209c3ab94f2524715d14d8cdb31892394d93ad2914c950d17 3 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=62e38a464bae248209c3ab94f2524715d14d8cdb31892394d93ad2914c950d17 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:04.504 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:04.762 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.e43 00:32:04.762 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.e43 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.e43 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=07d89190b9b9bfad514a8462b6705779c850c1bc93a8846b 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.udL 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 07d89190b9b9bfad514a8462b6705779c850c1bc93a8846b 0 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 07d89190b9b9bfad514a8462b6705779c850c1bc93a8846b 0 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix 
key digest 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=07d89190b9b9bfad514a8462b6705779c850c1bc93a8846b 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.udL 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.udL 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.udL 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=66479120e9ebd2cb5ce2b495a08f3ea4c5019b8ce9fb8df0 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.RYt 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 66479120e9ebd2cb5ce2b495a08f3ea4c5019b8ce9fb8df0 2 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 66479120e9ebd2cb5ce2b495a08f3ea4c5019b8ce9fb8df0 2 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=66479120e9ebd2cb5ce2b495a08f3ea4c5019b8ce9fb8df0 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.RYt 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.RYt 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.RYt 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=e6a312b8b7438551d1ca230a9007e9bc 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.aT0 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e6a312b8b7438551d1ca230a9007e9bc 1 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e6a312b8b7438551d1ca230a9007e9bc 1 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e6a312b8b7438551d1ca230a9007e9bc 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.aT0 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.aT0 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.aT0 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6faa6d2168a883141efa2c892aee1d53 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.qWH 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6faa6d2168a883141efa2c892aee1d53 1 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6faa6d2168a883141efa2c892aee1d53 1 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6faa6d2168a883141efa2c892aee1d53 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:04.763 19:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.qWH 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.qWH 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.qWH 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:04.763 19:02:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d784de4def5cb5c9965b40322dd8f691bc6233f6dc2cd991 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.EoK 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d784de4def5cb5c9965b40322dd8f691bc6233f6dc2cd991 2 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d784de4def5cb5c9965b40322dd8f691bc6233f6dc2cd991 2 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d784de4def5cb5c9965b40322dd8f691bc6233f6dc2cd991 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.EoK 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.EoK 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.EoK 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ae207e00a2694089908059e07cda81f4 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.XOP 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ae207e00a2694089908059e07cda81f4 0 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ae207e00a2694089908059e07cda81f4 0 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ae207e00a2694089908059e07cda81f4 00:32:04.763 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:04.763 19:02:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.XOP 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.XOP 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.XOP 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=724e201526b0e58445163b71e210debbf2111dd14d62c2908bdd15afa7e065f5 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.fHS 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 724e201526b0e58445163b71e210debbf2111dd14d62c2908bdd15afa7e065f5 3 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 724e201526b0e58445163b71e210debbf2111dd14d62c2908bdd15afa7e065f5 3 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=724e201526b0e58445163b71e210debbf2111dd14d62c2908bdd15afa7e065f5 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.fHS 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.fHS 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.fHS 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1524054 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 1524054 ']' 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:05.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
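Each gen_dhchap_key call above pulls a random secret out of /dev/urandom with xxd -p and hands the hex string to format_dhchap_key, which uses an inline python helper to wrap it as a DHHC-1 secret; the trailing digit selects the hash (0=null, 1=sha256, 2=sha384, 3=sha512, matching the digests map in the log). A rough standalone sketch of the same idea, assuming the usual DHHC-1 representation (base64 of the raw secret followed by its little-endian CRC-32, as nvme-cli's gen-dhchap-key produces); the helper below is illustrative, not the harness's own format_key code:

  # 64 hex chars from /dev/urandom, as in "gen_dhchap_key sha512 64"
  key_hex=$(xxd -p -c0 -l 32 /dev/urandom)
  key_file=$(mktemp -t spdk.key-sha512.XXX)
  # wrap as DHHC-1:03:<base64(secret + CRC-32)>: -- the CRC/base64 layout is an assumption
  python3 -c 'import base64,binascii,struct,sys; s=bytes.fromhex(sys.argv[1]); print("DHHC-1:03:%s:" % base64.b64encode(s + struct.pack("<I", binascii.crc32(s))).decode())' "$key_hex" > "$key_file"
  chmod 0600 "$key_file"                     # keys are kept mode 0600, as the log shows

The ckeys[] files generated alongside each keys[] entry are the paired controller-side secrets, used later in the test for bidirectional DHCHAP.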
00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:05.022 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pD2 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.e43 ]] 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.e43 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.udL 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.RYt ]] 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RYt 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.aT0 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.qWH ]] 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qWH 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.EoK 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.XOP ]] 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.XOP 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.fHS 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
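[editor's note] Once the target process (PID 1524054) is listening on /var/tmp/spdk.sock, each generated file is loaded into the SPDK keyring as traced above: keyN holds the host secret for key index N and ckeyN the controller (bidirectional) secret, with ckey4 left empty. Restated outside the test harness, the registration amounts to the sketch below; the rpc.py path is assumed from the workspace layout, and the temp file names are the ones from this particular run.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed client path
    keys=(/tmp/spdk.key-null.pD2 /tmp/spdk.key-null.udL /tmp/spdk.key-sha256.aT0
          /tmp/spdk.key-sha384.EoK /tmp/spdk.key-sha512.fHS)
    ckeys=(/tmp/spdk.key-sha512.e43 /tmp/spdk.key-sha384.RYt /tmp/spdk.key-sha256.qWH
           /tmp/spdk.key-null.XOP "")
    for i in "${!keys[@]}"; do
        "$rpc" keyring_file_add_key "key$i" "${keys[$i]}"                       # host secret
        [[ -n "${ckeys[$i]}" ]] && "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"  # controller secret, if any
    done

The later bdev_nvme_attach_controller calls then refer to these entries by name (key0/ckey0 ... key4) rather than by file path.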
00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:05.281 19:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:06.655 Waiting for block devices as requested 00:32:06.655 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:06.655 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:06.655 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:06.655 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:06.655 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:06.655 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:06.914 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:06.914 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:06.914 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:06.914 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:07.172 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:07.172 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:07.172 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:07.172 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:07.430 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:07.430 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:07.430 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:07.689 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:07.689 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:07.689 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:07.689 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:07.689 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:07.689 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:07.689 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:07.689 19:02:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:07.689 19:02:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:07.948 No valid GPT data, bailing 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:07.948 00:32:07.948 Discovery Log Number of Records 2, Generation counter 2 00:32:07.948 =====Discovery Log Entry 0====== 00:32:07.948 trtype: tcp 00:32:07.948 adrfam: ipv4 00:32:07.948 subtype: current discovery subsystem 00:32:07.948 treq: not specified, sq flow control disable supported 00:32:07.948 portid: 1 00:32:07.948 trsvcid: 4420 00:32:07.948 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:07.948 traddr: 10.0.0.1 00:32:07.948 eflags: none 00:32:07.948 sectype: none 00:32:07.948 =====Discovery Log Entry 1====== 00:32:07.948 trtype: tcp 00:32:07.948 adrfam: ipv4 00:32:07.948 subtype: nvme subsystem 00:32:07.948 treq: not specified, sq flow control disable supported 00:32:07.948 portid: 1 00:32:07.948 trsvcid: 4420 00:32:07.948 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:07.948 traddr: 10.0.0.1 00:32:07.948 eflags: none 00:32:07.948 sectype: none 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:07.948 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 
]] 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.949 nvme0n1 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.949 
19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.949 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: ]] 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.208 
19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.208 nvme0n1 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:08.208 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:08.209 19:02:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: ]] 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.209 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.467 nvme0n1 00:32:08.467 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.467 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.467 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.467 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.467 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:08.467 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.467 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.467 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.467 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.467 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.467 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.467 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.467 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: ]] 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.468 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.727 nvme0n1 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: ]] 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:08.727 19:02:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.727 nvme0n1 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.727 19:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.727 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.985 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.985 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.985 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.985 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.985 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.985 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.985 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.986 nvme0n1 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: ]] 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.986 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.244 nvme0n1 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: ]] 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:09.244 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:09.245 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:09.245 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.245 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.503 nvme0n1 00:32:09.503 
19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: ]] 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.503 nvme0n1 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.503 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
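[editor's note] From host/auth.sh@100 onward the test sweeps every digest, DH group and key index (sha256 with ffdhe2048 first, then ffdhe3072, and so on). Each iteration points the kernel target's host entry at the secret under test, narrows the SPDK initiator to the same digest/dhgroup, connects, checks that nvme0 appears, and detaches. A sketch of one iteration follows, using keyid 3 with sha256/ffdhe3072 as in the records just above; the configfs attribute names under the host directory (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are assumed from the kernel nvmet auth interface, since the trace only shows the values being echoed.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed client path
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    digest=sha256 dhgroup=ffdhe3072 keyid=3

    # Program the kernel target with the host/controller secrets for this key index.
    echo "hmac($digest)"          > "$host/dhchap_hash"       # assumed attribute name
    echo "$dhgroup"               > "$host/dhchap_dhgroup"    # assumed attribute name
    cat /tmp/spdk.key-sha384.EoK  > "$host/dhchap_key"        # keys[3] from this run
    cat /tmp/spdk.key-null.XOP    > "$host/dhchap_ctrl_key"   # ckeys[3]; skipped when empty

    # Restrict the initiator to the same combination, connect, verify, tear down.
    "$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
           -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
           --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    "$rpc" bdev_nvme_get_controllers | jq -r '.[].name'       # expected to print nvme0
    "$rpc" bdev_nvme_detach_controller nvme0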
00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: ]] 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.762 19:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.762 nvme0n1 00:32:09.763 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.763 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.763 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.763 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.763 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.763 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.021 
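The trace above is one pass of the sha256/ffdhe3072 leg of the DH-HMAC-CHAP matrix: nvmet_auth_set_key programs the target side for a key index, then connect_authenticate configures the host, attaches a controller with the matching key/ckey pair, checks that a controller named nvme0 shows up, and detaches it. A minimal sketch of one iteration, using only the RPC calls visible in the trace (the writes performed inside nvmet_auth_set_key are not shown in this excerpt and are left out here):

    # one iteration of the auth matrix, reconstructed from the xtrace above
    digest=sha256 dhgroup=ffdhe3072 keyid=2
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    ip=$(get_main_ns_ip)    # resolves to 10.0.0.1 for the tcp transport in this run
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0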
19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.021 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.022 19:02:20 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.022 nvme0n1 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: ]] 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:10.022 19:02:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.022 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.280 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.280 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.280 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.280 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.280 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.280 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.280 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.280 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.280 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:10.280 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.280 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.280 nvme0n1 00:32:10.280 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.280 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.280 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.281 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.281 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: ]] 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.540 19:02:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.540 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.852 nvme0n1 00:32:10.852 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.852 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.852 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.852 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.852 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.852 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.852 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.852 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.852 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.852 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.852 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.852 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.852 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:10.852 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.852 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:10.852 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:10.852 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:10.852 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: ]] 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.853 19:02:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.853 19:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.113 nvme0n1 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
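The repeated nvmf/common.sh@741-755 lines are get_main_ns_ip resolving the address used by the attach call: it keeps a candidate environment-variable name per transport and, for tcp, falls through to NVMF_INITIATOR_IP, which holds 10.0.0.1 in this run. A sketch assuming only the names visible in the trace (the variable that carries the transport string is not shown in this excerpt, so tcp is written literally here):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        ip=${ip_candidates[tcp]}        # transport is tcp here -> NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1     # indirect expansion; $NVMF_INITIATOR_IP is 10.0.0.1 in this run
        echo "${!ip}"
    }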
00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: ]] 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.113 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.114 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.114 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:11.114 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.114 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.373 nvme0n1 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.373 19:02:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.373 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.632 nvme0n1 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:11.632 19:02:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: ]] 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.632 19:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.633 19:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:11.633 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.633 19:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.199 nvme0n1 00:32:12.199 19:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.199 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.199 19:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.199 19:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.199 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.199 19:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.199 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.199 
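Keys and controller keys travel in pairs: host/auth.sh@58 builds the --dhchap-ctrlr-key argument only when a ckey exists for the index, which is why keyid 4 (the [[ -z '' ]] check in the trace) is attached with --dhchap-key alone while the other indices also pass ckeyN for bidirectional authentication. The relevant fragment, as it appears in the trace, with the expansion spelled out:

    # host/auth.sh@58: the controller key is optional per key index
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # keyids 0-3 have a ckey, so "${ckey[@]}" expands to: --dhchap-ctrlr-key ckeyN
    # keyid 4 has an empty ckey slot, so "${ckey[@]}" expands to nothing (host-side auth only)
    # the array is then spliced into the bdev_nvme_attach_controller call shown earlier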
19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.199 19:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.199 19:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: ]] 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.457 19:02:22 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:12.457 19:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.458 19:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:12.458 19:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:12.458 19:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:12.458 19:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:12.458 19:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.458 19:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.024 nvme0n1 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: ]] 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.024 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.588 nvme0n1 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.588 
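The host/auth.sh@101 and @102 markers show the two loops driving this section: the outer loop walks the FFDHE group list (ffdhe3072, ffdhe4096, ffdhe6144 and ffdhe8192 are the groups visible in this excerpt) and the inner loop walks every key index. The overall shape, using only values taken from the trace:

    # shape of the matrix driven by host/auth.sh@101-104
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)    # groups seen in this excerpt
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do                     # keys[] holds the DHHC-1 secrets, indices 0-4 here
            nvmet_auth_set_key   sha256 "$dhgroup" "$keyid"   # program the target
            connect_authenticate sha256 "$dhgroup" "$keyid"   # configure host, attach, verify, detach
        done
    done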
19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: ]] 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.588 19:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.151 nvme0n1 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.151 19:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.715 nvme0n1 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: ]] 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.715 19:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.647 nvme0n1 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.647 19:02:25 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: ]] 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.647 19:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.579 nvme0n1 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: ]] 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.579 19:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.512 nvme0n1 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.512 
19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: ]] 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
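
The get_main_ns_ip fragments that recur throughout this trace (nvmf/common.sh@741-755) simply map the transport under test to the environment variable holding the initiator address and print its value (10.0.0.1 in this run). A reduced sketch is below; the TEST_TRANSPORT variable name and the exported NVMF_INITIATOR_IP / NVMF_FIRST_TARGET_IP values are assumptions taken from the surrounding test setup, which the trace does not show directly.

    # Reduced sketch of the recurring get_main_ns_ip trace (nvmf/common.sh).
    # TEST_TRANSPORT and the NVMF_* address variables are assumed to be set by the
    # surrounding test environment ("tcp" and 10.0.0.1 in this log).
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                    # [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # [[ -z NVMF_INITIATOR_IP ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1                             # [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                           # echo 10.0.0.1
    }
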
00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.512 19:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.446 nvme0n1 00:32:18.446 19:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.446 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.446 19:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.446 19:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.446 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.446 19:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.446 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.446 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.446 19:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.446 19:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.446 19:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.446 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:18.447 
19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.447 19:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.382 nvme0n1 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: ]] 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.382 nvme0n1 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: ]] 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
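
Each nvmet_auth_set_key call traced above boils down to choosing a digest, a DH group and a key index, then pushing the corresponding DHHC-1 secret (and controller secret, when one is defined) to the kernel nvmet target. A minimal sketch follows; the configfs paths and dhchap_* attribute names are assumptions, since the trace only records the echoed values, not the files they are redirected into, while the keys/ckeys arrays and the host NQN are taken from the trace itself.

    # Sketch of the target-side key setup traced above (host/auth.sh nvmet_auth_set_key).
    # The /sys/kernel/config/nvmet/... paths are assumed; keys[] and ckeys[] are the
    # DHHC-1 secrets populated earlier in the script.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path

        echo "hmac(${digest})" > "${host}/dhchap_hash"      # e.g. 'hmac(sha256)'
        echo "${dhgroup}"      > "${host}/dhchap_dhgroup"   # e.g. ffdhe8192
        echo "${key}"          > "${host}/dhchap_key"       # DHHC-1:xx:...: host secret
        [[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"
    }
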
00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.382 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.383 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:19.383 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.383 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.641 nvme0n1 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: ]] 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.641 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.900 nvme0n1 00:32:19.901 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.901 19:02:29 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.901 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.901 19:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.901 19:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: ]] 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.901 nvme0n1 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.901 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.160 nvme0n1 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: ]] 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.160 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.420 nvme0n1 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: ]] 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
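
The connect_authenticate step is the host-side mirror of that key setup: it restricts the initiator to the one digest/DH-group pair under test, attaches a controller with the matching DH-HMAC-CHAP key, verifies the controller actually appears, and detaches it so the next combination can run. A condensed sketch of the RPC cycle, using only the rpc_cmd invocations visible in this trace, is:

    # Condensed host-side cycle traced above (host/auth.sh connect_authenticate).
    # rpc_cmd, the NQNs, address and keyN/ckeyN key names are as shown in the trace.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Allow only the digest/DH-group pair under test for this connection.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Attach with the host key (plus controller key, when one exists for this keyid).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # Authentication succeeded only if the controller shows up ...
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

        # ... after which it is detached so the next digest/dhgroup/keyid can be tried.
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
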
00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.420 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.683 nvme0n1 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: ]] 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.683 19:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.684 19:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:20.684 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.684 19:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.942 nvme0n1 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: ]] 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.942 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.200 nvme0n1 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.200 nvme0n1 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.200 19:02:31 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.200 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: ]] 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.458 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.716 nvme0n1 00:32:21.716 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.716 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.716 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.716 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.716 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.716 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: ]] 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.717 19:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.975 nvme0n1 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.975 19:02:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: ]] 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.975 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.233 nvme0n1 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: ]] 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:22.233 19:02:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.233 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.491 nvme0n1 00:32:22.491 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.491 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.491 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.491 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.491 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.491 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.491 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.491 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.491 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.491 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.491 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.491 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:22.491 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:22.491 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.491 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:22.491 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:22.491 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:22.491 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:22.749 19:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.007 nvme0n1 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: ]] 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.007 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.575 nvme0n1 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: ]] 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.575 19:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.833 nvme0n1 00:32:23.833 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.833 19:02:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.833 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.833 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.833 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: ]] 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:24.090 19:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.091 19:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.091 19:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.091 19:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.091 19:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.091 19:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.091 19:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.091 19:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.091 19:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.091 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:24.091 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.091 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.655 nvme0n1 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: ]] 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.655 19:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.222 nvme0n1 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
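
[editor's note] For readers following the trace, the cycle repeated above for every key index is the connect_authenticate step of host/auth.sh: restrict the host to one DH-HMAC-CHAP digest/DH-group pair, attach the TCP controller with that keyid's key names, confirm the controller actually appeared, then detach before the next pass. A minimal standalone sketch of one such pass, assuming the SPDK RPC socket is up, the target subsystem nqn.2024-02.io.spdk:cnode0 listens on 10.0.0.1:4420, the key/ckey names were registered earlier in the test, and scripts/rpc.py stands in for the harness's rpc_cmd wrapper:

# sketch of one connect_authenticate iteration (digest/dhgroup/keyid vary per loop pass)
digest=sha384 dhgroup=ffdhe6144 keyid=3
rpc=./scripts/rpc.py   # assumption: run from an SPDK checkout; the test uses rpc_cmd instead

# restrict the host to a single digest / DH-group combination
$rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# attach over TCP with the host key; the trace drops --dhchap-ctrlr-key when the keyid
# has no controller key (e.g. keyid 4 above, where ckey is empty)
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# authentication succeeded only if the controller materialized under the expected name
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# tear down before the next digest/dhgroup/keyid combination
$rpc bdev_nvme_detach_controller nvme0
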
00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.222 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.881 nvme0n1 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: ]] 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
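
[editor's note] The line markers in the trace (host/auth.sh@100 through @104) reveal how the matrix above is generated: three nested loops over digests, DH groups, and key indices, where each pass first provisions the key on the target side (nvmet_auth_set_key) and then runs the host-side connect (connect_authenticate). A rough reconstruction of that driver loop, with the arrays limited to what this part of the trace exercises and the two helpers standing for the functions host/auth.sh defines earlier:

# reconstruction of the driver loop implied by host/auth.sh@100..@104 in the trace
digests=(sha384 sha512)                          # digests exercised in this stretch of the log
dhgroups=(ffdhe2048 ffdhe3072 ffdhe6144 ffdhe8192)   # groups visible here; the full test may cover more
# keys[0..4] / ckeys[0..4] hold the DHHC-1 secrets generated earlier in the test

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # target side: hmac(<digest>), <dhgroup>, key/ckey
            connect_authenticate "$digest" "$dhgroup" "$keyid"    # host side: set options, attach, verify, detach
        done
    done
done
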
00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.881 19:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.813 nvme0n1 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: ]] 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.813 19:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.747 nvme0n1 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: ]] 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.747 19:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.682 nvme0n1 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: ]] 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.682 19:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.616 nvme0n1 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.616 19:02:39 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.616 19:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.553 nvme0n1 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: ]] 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:30.553 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:30.554 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:30.554 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.554 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:30.554 19:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.554 19:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.554 19:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.554 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.554 19:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.554 19:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.554 19:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.554 19:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.554 19:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.554 19:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.554 19:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.554 19:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.554 19:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.554 19:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.554 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:30.554 19:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.554 19:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.812 nvme0n1 00:32:30.812 19:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.813 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.813 19:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.813 19:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.813 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.813 19:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.813 19:02:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.813 19:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.813 19:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.813 19:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: ]] 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.813 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.072 nvme0n1 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: ]] 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.072 nvme0n1 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.072 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.331 19:02:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: ]] 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.331 19:02:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.331 nvme0n1 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.331 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.591 nvme0n1 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: ]] 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.591 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.851 nvme0n1 00:32:31.851 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.851 
19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.851 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.851 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.851 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.851 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.851 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.851 19:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.851 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.851 19:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: ]] 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.851 19:02:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.851 nvme0n1 00:32:31.851 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: ]] 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.110 nvme0n1 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.110 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.369 19:02:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: ]] 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
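The ip_candidates entries traced just above come from the get_main_ns_ip helper, which picks the address variable for the transport in use and dereferences it; for TCP that is NVMF_INITIATOR_IP, which resolves to 10.0.0.1 in this run. A condensed, happy-path sketch of that helper, reconstructed from the traced lines in nvmf/common.sh (the [[ -z ... ]] guards shown in the trace are omitted, and TEST_TRANSPORT is assumed to be set by the harness):

get_main_ns_ip() {
    # Condensed sketch only; the emptiness guards visible in the trace are left out.
    local ip
    local -A ip_candidates=(["rdma"]=NVMF_FIRST_TARGET_IP ["tcp"]=NVMF_INITIATOR_IP)
    ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp here, so NVMF_INITIATOR_IP
    echo "${!ip}"                          # indirect expansion, 10.0.0.1 in this run
}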
00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.369 nvme0n1 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.369 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:32.370 
19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.370 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.629 nvme0n1 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: ]] 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.629 19:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.886 nvme0n1 00:32:32.886 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.886 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.886 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.886 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.886 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.886 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: ]] 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.143 19:02:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.143 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.401 nvme0n1 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
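At this point the trace has finished a full round trip for key 1 (attach with key1/ckey1, confirm nvme0 exists, detach) and is setting up key 2. Stripped of the xtrace bookkeeping, each such iteration boils down to the sequence below; the RPCs, address and NQNs are copied from the trace, and keyN/ckeyN stand for the DHHC-1 secrets loaded earlier in auth.sh:

# One DH-HMAC-CHAP round trip, repeated for every key index in this trace.
nvmet_auth_set_key sha512 ffdhe4096 "$keyid"            # program the key on the kernel nvmet target
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"   # ctrlr-key only when a ckey is defined
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # controller authenticated and came up
rpc_cmd bdev_nvme_detach_controller nvme0               # tear down before the next key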
00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: ]] 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.401 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.659 nvme0n1 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: ]] 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.659 19:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.918 nvme0n1 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.918 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.176 nvme0n1 00:32:34.176 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.176 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.176 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.176 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.176 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.176 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.176 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.176 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.176 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:34.176 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: ]] 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.435 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.693 nvme0n1 00:32:34.693 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.693 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.693 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.693 19:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.693 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.693 19:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.693 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.693 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.693 19:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.693 19:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.951 19:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.951 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.951 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:34.951 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.951 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:34.951 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:34.951 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:34.951 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:34.951 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: ]] 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
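The DH group under test has now advanced to ffdhe6144, key 1; the whole section is a single sweep over digest sha512, the FFDHE groups and the five key indices, driven by the nested loops at host/auth.sh@101-104. Reduced to its loop structure (a sketch; only the dhgroups that appear in this excerpt are listed, and the keys/ckeys arrays are assumed to be populated earlier in the script):

for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do
    for keyid in "${!keys[@]}"; do                        # key indices 0..4
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"     # target-side key
        connect_authenticate sha512 "$dhgroup" "$keyid"   # host-side attach/verify/detach
    done
done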
00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.952 19:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.209 nvme0n1 00:32:35.209 19:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.209 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.209 19:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.209 19:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.209 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.209 19:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: ]] 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.466 19:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.032 nvme0n1 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: ]] 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.032 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.597 nvme0n1 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.597 19:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.162 nvme0n1 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.162 19:02:47 
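For readers following the trace, each connect_authenticate iteration in the sha512/ffdhe6144 round above reduces to a short initiator-side sequence. The sketch below is an assumed manual equivalent: scripts/rpc.py stands in for the rpc_cmd wrapper used by the harness, and key1/ckey1 name DH-HMAC-CHAP keys registered earlier in the test (not shown in this excerpt); the address, port and NQNs are copied from the trace.

  # allow only the digest/dhgroup under test, then attach with the host (and optional ctrlr) key
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # the attach only succeeds if DH-HMAC-CHAP negotiation with the kernel soft-target passes
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0

The same sequence is then repeated for every keyid, and again for the ffdhe8192 group in the records that follow.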
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3MjQxMmUyZWQwOWM2YTFmMmEwMGU1ZmZlMTk2MGNnUREq: 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: ]] 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMzhhNDY0YmFlMjQ4MjA5YzNhYjk0ZjI1MjQ3MTVkMTRkOGNkYjMxODkyMzk0ZDkzYWQyOTE0Yzk1MGQxN6V1GtM=: 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.162 19:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.094 nvme0n1 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: ]] 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.094 19:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.027 nvme0n1 00:32:39.027 19:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.027 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.028 19:02:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhMzEyYjhiNzQzODU1MWQxY2EyMzBhOTAwN2U5YmMAuP8b: 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: ]] 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmZhYTZkMjE2OGE4ODMxNDFlZmEyYzg5MmFlZTFkNTOiZZN0: 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.028 19:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.961 nvme0n1 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==: 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: ]] 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ: 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:39.961 19:02:50 
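On the target side, nvmet_auth_set_key (whose echo statements appear in the records just above) provisions the matching key for the kernel nvmet soft-target. A minimal sketch follows; the key values are copied from the keyid=3 iteration above, but the redirect targets are not captured in the wrapped log, so the configfs attribute paths are an assumption based on the kernel nvmet host interface.

  # assumed dhchap_* attributes under the host's configfs entry
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > "$host/dhchap_hash"
  echo ffdhe8192 > "$host/dhchap_dhgroup"
  echo 'DHHC-1:02:ZDc4NGRlNGRlZjVjYjVjOTk2NWI0MDMyMmRkOGY2OTFiYzYyMzNmNmRjMmNkOTkxSYBhEQ==:' > "$host/dhchap_key"
  echo 'DHHC-1:00:YWUyMDdlMDBhMjY5NDA4OTkwODA1OWUwN2NkYTgxZjQa5ZBQ:' > "$host/dhchap_ctrl_key"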
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.961 19:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.896 nvme0n1 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI0ZTIwMTUyNmIwZTU4NDQ1MTYzYjcxZTIxMGRlYmJmMjExMWRkMTRkNjJjMjkwOGJkZDE1YWZhN2UwNjVmNV8QSHA=: 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:40.896 19:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.832 nvme0n1 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdkODkxOTBiOWI5YmZhZDUxNGE4NDYyYjY3MDU3NzljODUwYzFiYzkzYTg4NDZibWWWDQ==: 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: ]] 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjY0NzkxMjBlOWViZDJjYjVjZTJiNDk1YTA4ZjNlYTRjNTAxOWI4Y2U5ZmI4ZGYwaAwUiQ==: 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.832 
19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:41.832 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:41.833 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:41.833 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:41.833 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:41.833 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:41.833 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:41.833 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:41.833 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.833 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.090 request: 00:32:42.090 { 00:32:42.090 "name": "nvme0", 00:32:42.090 "trtype": "tcp", 00:32:42.090 "traddr": "10.0.0.1", 00:32:42.090 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:42.090 "adrfam": "ipv4", 00:32:42.090 "trsvcid": "4420", 00:32:42.090 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:42.090 "method": "bdev_nvme_attach_controller", 00:32:42.090 "req_id": 1 00:32:42.090 } 00:32:42.090 Got JSON-RPC error response 00:32:42.090 response: 00:32:42.090 { 00:32:42.090 "code": -5, 00:32:42.090 "message": "Input/output error" 00:32:42.090 } 00:32:42.090 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:42.090 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:42.090 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:42.090 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:42.091 
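The request/response pair above is the expected outcome of the first negative check: the target has just been keyed for sha256/ffdhe2048 with keyid 1, so an initiator attach without any DH-HMAC-CHAP key must fail with JSON-RPC error -5 (Input/output error) and leave no controller behind. A hedged stand-alone equivalent of that assertion, again using scripts/rpc.py in place of the harness's rpc_cmd/NOT helpers:

  # the attach is expected to fail; treat success as a test failure
  if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
      echo 'unexpected: unauthenticated attach succeeded' >&2
      exit 1
  fi
  # and no stale controller may remain
  (( $(scripts/rpc.py bdev_nvme_get_controllers | jq length) == 0 ))

The two checks that follow in the trace repeat this pattern with a wrong key (key2) and with a mismatched controller key (key1 plus ckey2), each expected to return the same Input/output error.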
19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.091 request: 00:32:42.091 { 00:32:42.091 "name": "nvme0", 00:32:42.091 "trtype": "tcp", 00:32:42.091 "traddr": "10.0.0.1", 00:32:42.091 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:42.091 "adrfam": "ipv4", 00:32:42.091 "trsvcid": "4420", 00:32:42.091 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:42.091 "dhchap_key": "key2", 00:32:42.091 "method": "bdev_nvme_attach_controller", 00:32:42.091 "req_id": 1 00:32:42.091 } 00:32:42.091 Got JSON-RPC error response 00:32:42.091 response: 00:32:42.091 { 00:32:42.091 "code": -5, 00:32:42.091 "message": "Input/output error" 00:32:42.091 } 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:42.091 
19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.091 request: 00:32:42.091 { 00:32:42.091 "name": "nvme0", 00:32:42.091 "trtype": "tcp", 00:32:42.091 "traddr": "10.0.0.1", 00:32:42.091 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:42.091 "adrfam": "ipv4", 00:32:42.091 "trsvcid": "4420", 00:32:42.091 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:42.091 "dhchap_key": "key1", 00:32:42.091 "dhchap_ctrlr_key": "ckey2", 00:32:42.091 "method": "bdev_nvme_attach_controller", 00:32:42.091 "req_id": 1 
00:32:42.091 } 00:32:42.091 Got JSON-RPC error response 00:32:42.091 response: 00:32:42.091 { 00:32:42.091 "code": -5, 00:32:42.091 "message": "Input/output error" 00:32:42.091 } 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:42.091 rmmod nvme_tcp 00:32:42.091 rmmod nvme_fabrics 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1524054 ']' 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1524054 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 1524054 ']' 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 1524054 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:42.091 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1524054 00:32:42.349 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:42.349 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:42.349 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1524054' 00:32:42.349 killing process with pid 1524054 00:32:42.349 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 1524054 00:32:42.349 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 1524054 00:32:42.349 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:42.349 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:42.349 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:42.349 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:42.349 19:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:42.349 19:02:52 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.349 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:42.349 19:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.871 19:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:44.871 19:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:44.871 19:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:44.871 19:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:44.871 19:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:44.871 19:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:32:44.871 19:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:44.871 19:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:44.871 19:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:44.871 19:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:44.871 19:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:44.871 19:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:44.871 19:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:45.802 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:45.802 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:45.802 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:45.802 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:45.802 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:45.802 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:45.802 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:45.802 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:45.802 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:45.802 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:45.802 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:45.802 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:45.802 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:45.802 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:45.802 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:45.802 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:46.737 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:46.737 19:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.pD2 /tmp/spdk.key-null.udL /tmp/spdk.key-sha256.aT0 /tmp/spdk.key-sha384.EoK /tmp/spdk.key-sha512.fHS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:46.737 19:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:47.719 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:47.719 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:47.719 0000:00:04.6 (8086 0e26): Already using the 
vfio-pci driver 00:32:47.719 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:47.719 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:47.719 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:47.719 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:47.719 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:47.719 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:47.719 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:47.719 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:47.719 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:47.719 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:47.719 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:47.719 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:47.719 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:47.719 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:47.977 00:32:47.977 real 0m46.109s 00:32:47.977 user 0m43.915s 00:32:47.977 sys 0m5.647s 00:32:47.977 19:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:47.977 19:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.977 ************************************ 00:32:47.977 END TEST nvmf_auth_host 00:32:47.977 ************************************ 00:32:47.977 19:02:58 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:32:47.977 19:02:58 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:47.977 19:02:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:47.977 19:02:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:47.977 19:02:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:47.977 ************************************ 00:32:47.977 START TEST nvmf_digest 00:32:47.977 ************************************ 00:32:47.977 19:02:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:47.977 * Looking for test storage... 
00:32:47.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:47.977 19:02:58 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:47.977 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:47.977 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:47.977 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:47.977 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:47.977 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:47.977 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:47.977 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:47.977 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:47.977 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:47.977 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:47.977 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:47.977 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:47.977 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:47.977 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:47.977 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:47.977 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:47.977 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:47.978 19:02:58 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:32:47.978 19:02:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:49.877 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:49.877 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:49.877 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:49.878 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:49.878 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:49.878 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:50.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:50.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:32:50.136 00:32:50.136 --- 10.0.0.2 ping statistics --- 00:32:50.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.136 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:50.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:50.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:32:50.136 00:32:50.136 --- 10.0.0.1 ping statistics --- 00:32:50.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.136 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:50.136 ************************************ 00:32:50.136 START TEST nvmf_digest_clean 00:32:50.136 ************************************ 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1533104 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1533104 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1533104 ']' 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.136 
19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:50.136 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:50.136 [2024-07-20 19:03:00.414172] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:32:50.136 [2024-07-20 19:03:00.414249] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:50.136 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.394 [2024-07-20 19:03:00.482757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.394 [2024-07-20 19:03:00.575805] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:50.394 [2024-07-20 19:03:00.575862] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:50.394 [2024-07-20 19:03:00.575887] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:50.394 [2024-07-20 19:03:00.575902] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:50.394 [2024-07-20 19:03:00.575915] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
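For reference, the TCP "phy" topology the digest suite runs against is the one nvmf_tcp_init set up just above: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1. A minimal recap of that plumbing, using the interface names and addresses from this particular run (the canonical logic lives in the sourced test/nvmf/common.sh, and the target launch path is abbreviated to the spdk checkout root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc   # nvmfappstart

Both directions are verified with a single ping (the ping statistics above) before the target application is started inside the namespace.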
00:32:50.394 [2024-07-20 19:03:00.575950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.394 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:50.394 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:32:50.394 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:50.394 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:50.394 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:50.394 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:50.394 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:50.394 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:50.394 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:50.394 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.394 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:50.652 null0 00:32:50.652 [2024-07-20 19:03:00.756482] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:50.652 [2024-07-20 19:03:00.780710] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:50.652 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.652 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:50.652 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:50.652 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:50.652 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:50.653 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:50.653 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:50.653 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:50.653 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1533161 00:32:50.653 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1533161 /var/tmp/bperf.sock 00:32:50.653 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1533161 ']' 00:32:50.653 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:50.653 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:50.653 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:50.653 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:32:50.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:50.653 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:50.653 19:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:50.653 [2024-07-20 19:03:00.828115] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:32:50.653 [2024-07-20 19:03:00.828187] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1533161 ] 00:32:50.653 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.653 [2024-07-20 19:03:00.889332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.910 [2024-07-20 19:03:00.978860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.910 19:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:50.910 19:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:32:50.910 19:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:50.910 19:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:50.910 19:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:51.168 19:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:51.168 19:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:51.425 nvme0n1 00:32:51.425 19:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:51.425 19:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:51.683 Running I/O for 2 seconds... 
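The pass whose results follow is driven entirely over bperf's RPC socket rather than by command-line arguments alone: bdevperf is launched idle with -z --wait-for-rpc, the accel framework is started, a single NVMe/TCP controller is attached with the data-digest flag, and only then is the I/O kicked off. Roughly, with the socket, address, and NQN taken from this run and the long workspace paths abbreviated to the repository root:

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The --ddgst flag enables the NVMe/TCP data digest on the connection, which is what generates the crc32c work that the check after each run looks for.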
00:32:53.579 00:32:53.579 Latency(us) 00:32:53.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:53.579 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:53.579 nvme0n1 : 2.00 19794.64 77.32 0.00 0.00 6456.88 3106.89 13107.20 00:32:53.579 =================================================================================================================== 00:32:53.579 Total : 19794.64 77.32 0.00 0.00 6456.88 3106.89 13107.20 00:32:53.579 0 00:32:53.579 19:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:53.579 19:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:53.579 19:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:53.579 19:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:53.579 19:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:53.579 | select(.opcode=="crc32c") 00:32:53.579 | "\(.module_name) \(.executed)"' 00:32:53.837 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:53.837 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:53.837 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:53.837 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:53.837 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1533161 00:32:53.837 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1533161 ']' 00:32:53.837 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1533161 00:32:53.837 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:32:53.837 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:53.837 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1533161 00:32:53.837 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:53.837 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:53.837 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1533161' 00:32:53.837 killing process with pid 1533161 00:32:53.837 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1533161 00:32:53.837 Received shutdown signal, test time was about 2.000000 seconds 00:32:53.837 00:32:53.837 Latency(us) 00:32:53.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:53.837 =================================================================================================================== 00:32:53.837 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:53.837 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1533161 00:32:54.094 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:54.094 19:03:04 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:54.094 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:54.094 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:54.094 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:54.094 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:54.094 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:54.094 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1533630 00:32:54.094 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:54.094 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1533630 /var/tmp/bperf.sock 00:32:54.094 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1533630 ']' 00:32:54.094 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:54.094 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:54.094 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:54.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:54.094 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:54.094 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:54.094 [2024-07-20 19:03:04.375921] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:32:54.094 [2024-07-20 19:03:04.375998] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1533630 ] 00:32:54.094 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:54.094 Zero copy mechanism will not be used. 
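Each two-second run is followed by the same verification, visible above after the first randread pass: the harness reads the accel statistics back over the bperf socket and checks that the crc32c opcode was actually executed, and by the expected module; with scan_dsa=false, as in every pass here, the expected module is the software one. A sketch of that check using the exact jq filter from the trace (the read loop is simplified relative to host/digest.sh):

    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' |
        while read -r acc_module acc_executed; do
            # a non-zero count from the "software" module is expected when DSA is not in use
            (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "crc32c handled by $acc_module ($acc_executed ops)"
        done

As a quick sanity check on the first result table, 19794.64 IOPS at 4096 bytes per I/O works out to about 77.3 MiB/s, matching the MiB/s column reported there.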
00:32:54.094 EAL: No free 2048 kB hugepages reported on node 1 00:32:54.352 [2024-07-20 19:03:04.436247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.352 [2024-07-20 19:03:04.523176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:54.352 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:54.352 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:32:54.352 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:54.352 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:54.352 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:54.947 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:54.947 19:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:55.204 nvme0n1 00:32:55.204 19:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:55.204 19:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:55.204 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:55.204 Zero copy mechanism will not be used. 00:32:55.204 Running I/O for 2 seconds... 
00:32:57.105 00:32:57.105 Latency(us) 00:32:57.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:57.105 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:57.105 nvme0n1 : 2.01 1720.83 215.10 0.00 0.00 9294.63 9029.40 14660.65 00:32:57.105 =================================================================================================================== 00:32:57.105 Total : 1720.83 215.10 0.00 0.00 9294.63 9029.40 14660.65 00:32:57.105 0 00:32:57.105 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:57.105 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:57.105 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:57.105 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:57.105 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:57.105 | select(.opcode=="crc32c") 00:32:57.105 | "\(.module_name) \(.executed)"' 00:32:57.362 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:57.362 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:57.362 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:57.362 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:57.362 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1533630 00:32:57.362 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1533630 ']' 00:32:57.362 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1533630 00:32:57.362 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:32:57.362 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:57.362 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1533630 00:32:57.620 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:57.620 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:57.620 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1533630' 00:32:57.620 killing process with pid 1533630 00:32:57.620 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1533630 00:32:57.620 Received shutdown signal, test time was about 2.000000 seconds 00:32:57.620 00:32:57.620 Latency(us) 00:32:57.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:57.620 =================================================================================================================== 00:32:57.620 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:57.620 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1533630 00:32:57.620 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:32:57.620 19:03:07 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:57.620 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:57.620 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:57.620 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:57.620 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:57.620 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:57.620 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1534285 00:32:57.620 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:57.620 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1534285 /var/tmp/bperf.sock 00:32:57.620 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1534285 ']' 00:32:57.620 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:57.620 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:57.620 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:57.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:57.620 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:57.620 19:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:57.878 [2024-07-20 19:03:07.959575] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:32:57.878 [2024-07-20 19:03:07.959665] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1534285 ] 00:32:57.878 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.878 [2024-07-20 19:03:08.025478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.878 [2024-07-20 19:03:08.114885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:57.878 19:03:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:57.878 19:03:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:32:57.878 19:03:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:57.878 19:03:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:57.878 19:03:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:58.443 19:03:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:58.443 19:03:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:58.701 nvme0n1 00:32:58.701 19:03:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:58.701 19:03:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:58.961 Running I/O for 2 seconds... 
00:33:00.883 00:33:00.883 Latency(us) 00:33:00.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.883 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:00.883 nvme0n1 : 2.00 20375.10 79.59 0.00 0.00 6272.29 3021.94 13592.65 00:33:00.883 =================================================================================================================== 00:33:00.883 Total : 20375.10 79.59 0.00 0.00 6272.29 3021.94 13592.65 00:33:00.883 0 00:33:00.883 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:00.883 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:00.883 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:00.883 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:00.883 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:00.883 | select(.opcode=="crc32c") 00:33:00.883 | "\(.module_name) \(.executed)"' 00:33:01.146 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:01.146 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:01.146 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:01.146 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:01.146 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1534285 00:33:01.146 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1534285 ']' 00:33:01.146 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1534285 00:33:01.146 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:01.146 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:01.146 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1534285 00:33:01.146 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:01.146 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:01.146 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1534285' 00:33:01.146 killing process with pid 1534285 00:33:01.146 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1534285 00:33:01.146 Received shutdown signal, test time was about 2.000000 seconds 00:33:01.146 00:33:01.146 Latency(us) 00:33:01.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:01.146 =================================================================================================================== 00:33:01.146 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:01.146 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1534285 00:33:01.404 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:01.404 19:03:11 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:01.404 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:01.404 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:01.404 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:01.404 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:01.404 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:01.404 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1534973 00:33:01.404 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:01.404 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1534973 /var/tmp/bperf.sock 00:33:01.404 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1534973 ']' 00:33:01.404 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:01.404 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:01.404 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:01.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:01.404 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:01.404 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:01.404 [2024-07-20 19:03:11.673241] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:01.404 [2024-07-20 19:03:11.673322] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1534973 ] 00:33:01.404 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:01.404 Zero copy mechanism will not be used. 
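This is the last of the four clean-digest passes; the suite walks the same attach/run/verify flow through reads and writes at a small and a large block size, always with DSA scanning disabled. The four invocations traced in this suite (argument order follows the rw/bs/qd/scan_dsa locals in host/digest.sh):

    run_bperf randread  4096   128 false
    run_bperf randread  131072 16  false
    run_bperf randwrite 4096   128 false
    run_bperf randwrite 131072 16  false

The 131072-byte cases are also what produce the "I/O size of 131072 is greater than zero copy threshold (65536)" notices above, since they exceed the 65536-byte zero-copy threshold the tool reports.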
00:33:01.404 EAL: No free 2048 kB hugepages reported on node 1 00:33:01.662 [2024-07-20 19:03:11.739153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.662 [2024-07-20 19:03:11.829379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:01.662 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:01.662 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:01.662 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:01.662 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:01.662 19:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:01.920 19:03:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:01.920 19:03:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:02.485 nvme0n1 00:33:02.485 19:03:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:02.485 19:03:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:02.485 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:02.485 Zero copy mechanism will not be used. 00:33:02.485 Running I/O for 2 seconds... 
00:33:04.385 00:33:04.385 Latency(us) 00:33:04.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:04.385 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:04.385 nvme0n1 : 2.02 938.74 117.34 0.00 0.00 16965.47 10874.12 23495.87 00:33:04.385 =================================================================================================================== 00:33:04.385 Total : 938.74 117.34 0.00 0.00 16965.47 10874.12 23495.87 00:33:04.385 0 00:33:04.385 19:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:04.385 19:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:04.385 19:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:04.385 19:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:04.385 19:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:04.385 | select(.opcode=="crc32c") 00:33:04.385 | "\(.module_name) \(.executed)"' 00:33:04.643 19:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:04.643 19:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:04.643 19:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:04.643 19:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:04.643 19:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1534973 00:33:04.643 19:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1534973 ']' 00:33:04.643 19:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1534973 00:33:04.643 19:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:04.643 19:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:04.643 19:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1534973 00:33:04.643 19:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:04.643 19:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:04.643 19:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1534973' 00:33:04.643 killing process with pid 1534973 00:33:04.643 19:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1534973 00:33:04.643 Received shutdown signal, test time was about 2.000000 seconds 00:33:04.643 00:33:04.643 Latency(us) 00:33:04.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:04.643 =================================================================================================================== 00:33:04.643 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:04.643 19:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1534973 00:33:04.901 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1533104 00:33:04.901 19:03:15 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1533104 ']' 00:33:04.901 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1533104 00:33:04.901 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:04.901 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:04.901 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1533104 00:33:04.901 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:04.901 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:04.901 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1533104' 00:33:04.901 killing process with pid 1533104 00:33:04.901 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1533104 00:33:04.901 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1533104 00:33:05.159 00:33:05.159 real 0m15.045s 00:33:05.159 user 0m30.610s 00:33:05.159 sys 0m3.596s 00:33:05.159 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:05.159 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:05.159 ************************************ 00:33:05.159 END TEST nvmf_digest_clean 00:33:05.159 ************************************ 00:33:05.159 19:03:15 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:05.159 19:03:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:05.159 19:03:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:05.159 19:03:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:05.159 ************************************ 00:33:05.159 START TEST nvmf_digest_error 00:33:05.159 ************************************ 00:33:05.159 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:33:05.159 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:05.159 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:05.159 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:05.159 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:05.159 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1535504 00:33:05.159 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:05.159 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1535504 00:33:05.159 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1535504 ']' 00:33:05.159 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:05.159 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:33:05.159 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:05.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:05.159 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:05.159 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:05.417 [2024-07-20 19:03:15.506323] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:05.417 [2024-07-20 19:03:15.506408] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:05.417 EAL: No free 2048 kB hugepages reported on node 1 00:33:05.417 [2024-07-20 19:03:15.574062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.417 [2024-07-20 19:03:15.664357] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:05.417 [2024-07-20 19:03:15.664417] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:05.417 [2024-07-20 19:03:15.664434] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:05.417 [2024-07-20 19:03:15.664447] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:05.417 [2024-07-20 19:03:15.664460] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
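(Editor's note, not part of the captured output.) The nvmf_digest_error setup that the xtrace lines around this point walk through boils down to a short RPC sequence. The sketch below only restates commands that appear verbatim in this log: the target is started with --wait-for-rpc so crc32c can be assigned to the "error" accel module before framework init, a TCP listener on 10.0.0.2:4420 exposes the null0 bdev, bdevperf attaches with data digest enabled (--ddgst), and corruption is injected just before the I/O run. Paths are shortened to be relative to the SPDK tree and the target-side calls are shown without an explicit -s socket flag; treat both as assumptions of this sketch.

# target side (nvmf_tgt started with --wait-for-rpc): route crc32c through the error-injecting accel module
scripts/rpc.py accel_assign_opc -o crc32c -m error

# initiator side: bdevperf on its own RPC socket; unlimited bdev retries, per-error stats enabled
build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# keep digests intact while the controller attaches, then corrupt the next 256 crc32c results and run I/O
scripts/rpc.py accel_error_inject_error -o crc32c -t disable
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests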
00:33:05.417 [2024-07-20 19:03:15.664490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:05.417 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:05.417 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:05.417 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:05.418 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:05.418 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:05.676 [2024-07-20 19:03:15.769186] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:05.676 null0 00:33:05.676 [2024-07-20 19:03:15.887894] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:05.676 [2024-07-20 19:03:15.912160] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1535524 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1535524 /var/tmp/bperf.sock 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1535524 ']' 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:05.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:05.676 19:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:05.676 [2024-07-20 19:03:15.957818] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:05.676 [2024-07-20 19:03:15.957905] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535524 ] 00:33:05.676 EAL: No free 2048 kB hugepages reported on node 1 00:33:05.935 [2024-07-20 19:03:16.019910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.935 [2024-07-20 19:03:16.110722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.935 19:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:05.935 19:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:05.935 19:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:05.935 19:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:06.192 19:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:06.192 19:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.192 19:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:06.192 19:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.192 19:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:06.192 19:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:06.757 nvme0n1 00:33:06.757 19:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:06.757 19:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.757 19:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:06.757 19:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.757 19:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:06.757 19:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:06.757 Running I/O for 2 seconds... 00:33:07.014 [2024-07-20 19:03:17.093201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.014 [2024-07-20 19:03:17.093249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.014 [2024-07-20 19:03:17.093268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.014 [2024-07-20 19:03:17.106187] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.014 [2024-07-20 19:03:17.106221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.014 [2024-07-20 19:03:17.106239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.014 [2024-07-20 19:03:17.119216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.014 [2024-07-20 19:03:17.119263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.014 [2024-07-20 19:03:17.119280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.014 [2024-07-20 19:03:17.132123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.014 [2024-07-20 19:03:17.132155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.014 [2024-07-20 19:03:17.132174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.014 [2024-07-20 19:03:17.144748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.014 [2024-07-20 19:03:17.144802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.014 [2024-07-20 19:03:17.144821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.014 [2024-07-20 19:03:17.158404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.014 [2024-07-20 19:03:17.158434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.014 [2024-07-20 19:03:17.158451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.014 [2024-07-20 19:03:17.171015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.014 [2024-07-20 19:03:17.171046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23660 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:07.014 [2024-07-20 19:03:17.171063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.014 [2024-07-20 19:03:17.183230] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.014 [2024-07-20 19:03:17.183259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.014 [2024-07-20 19:03:17.183276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.014 [2024-07-20 19:03:17.196162] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.014 [2024-07-20 19:03:17.196192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.015 [2024-07-20 19:03:17.196224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.015 [2024-07-20 19:03:17.208956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.015 [2024-07-20 19:03:17.208986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.015 [2024-07-20 19:03:17.209004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.015 [2024-07-20 19:03:17.220987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.015 [2024-07-20 19:03:17.221018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.015 [2024-07-20 19:03:17.221035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.015 [2024-07-20 19:03:17.234660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.015 [2024-07-20 19:03:17.234688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.015 [2024-07-20 19:03:17.234704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.015 [2024-07-20 19:03:17.246867] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.015 [2024-07-20 19:03:17.246908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.015 [2024-07-20 19:03:17.246926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.015 [2024-07-20 19:03:17.259580] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.015 [2024-07-20 19:03:17.259617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:107 nsid:1 lba:22540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.015 [2024-07-20 19:03:17.259634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.015 [2024-07-20 19:03:17.273260] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.015 [2024-07-20 19:03:17.273307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.015 [2024-07-20 19:03:17.273324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.015 [2024-07-20 19:03:17.285225] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.015 [2024-07-20 19:03:17.285270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.015 [2024-07-20 19:03:17.285287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.015 [2024-07-20 19:03:17.298849] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.015 [2024-07-20 19:03:17.298881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.015 [2024-07-20 19:03:17.298898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.015 [2024-07-20 19:03:17.311676] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.015 [2024-07-20 19:03:17.311722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.015 [2024-07-20 19:03:17.311740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.015 [2024-07-20 19:03:17.324863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.015 [2024-07-20 19:03:17.324893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.015 [2024-07-20 19:03:17.324911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.015 [2024-07-20 19:03:17.336878] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.015 [2024-07-20 19:03:17.336909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.015 [2024-07-20 19:03:17.336927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.273 [2024-07-20 19:03:17.349879] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.273 [2024-07-20 19:03:17.349910] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.273 [2024-07-20 19:03:17.349928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.273 [2024-07-20 19:03:17.362621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.273 [2024-07-20 19:03:17.362651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.273 [2024-07-20 19:03:17.362668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.273 [2024-07-20 19:03:17.375939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.273 [2024-07-20 19:03:17.375970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.273 [2024-07-20 19:03:17.375987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.273 [2024-07-20 19:03:17.389053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.273 [2024-07-20 19:03:17.389084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.273 [2024-07-20 19:03:17.389118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.273 [2024-07-20 19:03:17.400777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.273 [2024-07-20 19:03:17.400831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.273 [2024-07-20 19:03:17.400850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.273 [2024-07-20 19:03:17.414111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.273 [2024-07-20 19:03:17.414157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.273 [2024-07-20 19:03:17.414175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.273 [2024-07-20 19:03:17.426429] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.273 [2024-07-20 19:03:17.426459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.273 [2024-07-20 19:03:17.426475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.273 [2024-07-20 19:03:17.439087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 
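(Editor's note, not part of the captured output.) Each pair of entries in this stretch is one injected failure: crc32c on the target was assigned to the "error" accel module and switched to corrupt mode for 256 operations earlier in this log, so the TCP data digest it computes no longer matches the payload; the initiator's nvme_tcp layer then logs "data digest error" on the qpair and the READ completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22). A minimal sketch of the two RPCs gating that behaviour, copied from the commands shown above (socket flag omitted as an assumption):

scripts/rpc.py accel_error_inject_error -o crc32c -t disable          # digests left intact while the controller attaches
scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt the next 256 crc32c results before perform_tests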
00:33:07.273 [2024-07-20 19:03:17.439125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.273 [2024-07-20 19:03:17.439142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.273 [2024-07-20 19:03:17.453023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.273 [2024-07-20 19:03:17.453053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.273 [2024-07-20 19:03:17.453069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.273 [2024-07-20 19:03:17.464917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.273 [2024-07-20 19:03:17.464957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.273 [2024-07-20 19:03:17.464975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.273 [2024-07-20 19:03:17.478862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.274 [2024-07-20 19:03:17.478892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.274 [2024-07-20 19:03:17.478927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.274 [2024-07-20 19:03:17.491104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.274 [2024-07-20 19:03:17.491135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.274 [2024-07-20 19:03:17.491152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.274 [2024-07-20 19:03:17.503533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.274 [2024-07-20 19:03:17.503562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.274 [2024-07-20 19:03:17.503579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.274 [2024-07-20 19:03:17.516911] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.274 [2024-07-20 19:03:17.516941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.274 [2024-07-20 19:03:17.516958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.274 [2024-07-20 19:03:17.528330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x778360) 00:33:07.274 [2024-07-20 19:03:17.528360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.274 [2024-07-20 19:03:17.528377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.274 [2024-07-20 19:03:17.541524] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.274 [2024-07-20 19:03:17.541554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.274 [2024-07-20 19:03:17.541587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.274 [2024-07-20 19:03:17.554401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.274 [2024-07-20 19:03:17.554430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.274 [2024-07-20 19:03:17.554447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.274 [2024-07-20 19:03:17.566706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.274 [2024-07-20 19:03:17.566736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.274 [2024-07-20 19:03:17.566754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.274 [2024-07-20 19:03:17.578664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.274 [2024-07-20 19:03:17.578694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.274 [2024-07-20 19:03:17.578710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.274 [2024-07-20 19:03:17.592945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.274 [2024-07-20 19:03:17.592982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.274 [2024-07-20 19:03:17.592999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.532 [2024-07-20 19:03:17.604951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.532 [2024-07-20 19:03:17.604981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.532 [2024-07-20 19:03:17.604997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.532 [2024-07-20 19:03:17.617408] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.532 [2024-07-20 19:03:17.617437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.532 [2024-07-20 19:03:17.617455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.532 [2024-07-20 19:03:17.630253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.532 [2024-07-20 19:03:17.630282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.532 [2024-07-20 19:03:17.630315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.532 [2024-07-20 19:03:17.643004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.532 [2024-07-20 19:03:17.643034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.533 [2024-07-20 19:03:17.643052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.533 [2024-07-20 19:03:17.656103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.533 [2024-07-20 19:03:17.656132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.533 [2024-07-20 19:03:17.656165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.533 [2024-07-20 19:03:17.668234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.533 [2024-07-20 19:03:17.668264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.533 [2024-07-20 19:03:17.668280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.533 [2024-07-20 19:03:17.680790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.533 [2024-07-20 19:03:17.680826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.533 [2024-07-20 19:03:17.680844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.533 [2024-07-20 19:03:17.694123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.533 [2024-07-20 19:03:17.694153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.533 [2024-07-20 19:03:17.694171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:33:07.533 [2024-07-20 19:03:17.705509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.533 [2024-07-20 19:03:17.705539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.533 [2024-07-20 19:03:17.705557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.533 [2024-07-20 19:03:17.720093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.533 [2024-07-20 19:03:17.720123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.533 [2024-07-20 19:03:17.720140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.533 [2024-07-20 19:03:17.732331] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.533 [2024-07-20 19:03:17.732361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.533 [2024-07-20 19:03:17.732378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.533 [2024-07-20 19:03:17.744212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.533 [2024-07-20 19:03:17.744240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.533 [2024-07-20 19:03:17.744270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.533 [2024-07-20 19:03:17.757420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.533 [2024-07-20 19:03:17.757449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.533 [2024-07-20 19:03:17.757482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.533 [2024-07-20 19:03:17.770686] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.533 [2024-07-20 19:03:17.770715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.533 [2024-07-20 19:03:17.770747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.533 [2024-07-20 19:03:17.782996] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.533 [2024-07-20 19:03:17.783025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.533 [2024-07-20 19:03:17.783042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.533 [2024-07-20 19:03:17.795363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.533 [2024-07-20 19:03:17.795393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.533 [2024-07-20 19:03:17.795426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.533 [2024-07-20 19:03:17.808158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.533 [2024-07-20 19:03:17.808186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.533 [2024-07-20 19:03:17.808227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.533 [2024-07-20 19:03:17.820468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.533 [2024-07-20 19:03:17.820497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.533 [2024-07-20 19:03:17.820530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.533 [2024-07-20 19:03:17.832952] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.533 [2024-07-20 19:03:17.832982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.533 [2024-07-20 19:03:17.832999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.533 [2024-07-20 19:03:17.845332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.533 [2024-07-20 19:03:17.845361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.533 [2024-07-20 19:03:17.845394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.792 [2024-07-20 19:03:17.858268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.792 [2024-07-20 19:03:17.858298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.792 [2024-07-20 19:03:17.858315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.792 [2024-07-20 19:03:17.870832] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.792 [2024-07-20 19:03:17.870861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.792 [2024-07-20 19:03:17.870878] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.792 [2024-07-20 19:03:17.883573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.792 [2024-07-20 19:03:17.883602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.792 [2024-07-20 19:03:17.883635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.792 [2024-07-20 19:03:17.895767] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.792 [2024-07-20 19:03:17.895804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.792 [2024-07-20 19:03:17.895823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.792 [2024-07-20 19:03:17.908110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.792 [2024-07-20 19:03:17.908139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.792 [2024-07-20 19:03:17.908155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.792 [2024-07-20 19:03:17.921974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.792 [2024-07-20 19:03:17.922012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.792 [2024-07-20 19:03:17.922030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.792 [2024-07-20 19:03:17.934430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.792 [2024-07-20 19:03:17.934459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.792 [2024-07-20 19:03:17.934492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.792 [2024-07-20 19:03:17.946491] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.792 [2024-07-20 19:03:17.946519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.792 [2024-07-20 19:03:17.946552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.792 [2024-07-20 19:03:17.958757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.792 [2024-07-20 19:03:17.958787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.792 [2024-07-20 19:03:17.958812] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.792 [2024-07-20 19:03:17.971190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.792 [2024-07-20 19:03:17.971220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.792 [2024-07-20 19:03:17.971237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.792 [2024-07-20 19:03:17.984895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.792 [2024-07-20 19:03:17.984924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.792 [2024-07-20 19:03:17.984942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.792 [2024-07-20 19:03:17.996600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.792 [2024-07-20 19:03:17.996644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.792 [2024-07-20 19:03:17.996661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.792 [2024-07-20 19:03:18.009672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.792 [2024-07-20 19:03:18.009702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.792 [2024-07-20 19:03:18.009719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.792 [2024-07-20 19:03:18.022398] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.792 [2024-07-20 19:03:18.022427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.792 [2024-07-20 19:03:18.022467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.792 [2024-07-20 19:03:18.034432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.792 [2024-07-20 19:03:18.034462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.792 [2024-07-20 19:03:18.034479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.792 [2024-07-20 19:03:18.047517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.792 [2024-07-20 19:03:18.047546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:07.792 [2024-07-20 19:03:18.047578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.792 [2024-07-20 19:03:18.059353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.792 [2024-07-20 19:03:18.059381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.792 [2024-07-20 19:03:18.059398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.792 [2024-07-20 19:03:18.072604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.792 [2024-07-20 19:03:18.072632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.792 [2024-07-20 19:03:18.072665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.792 [2024-07-20 19:03:18.085618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.792 [2024-07-20 19:03:18.085647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.792 [2024-07-20 19:03:18.085664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.792 [2024-07-20 19:03:18.098349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.792 [2024-07-20 19:03:18.098378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.792 [2024-07-20 19:03:18.098396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.792 [2024-07-20 19:03:18.110935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:07.792 [2024-07-20 19:03:18.110964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.792 [2024-07-20 19:03:18.110982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.051 [2024-07-20 19:03:18.123351] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.051 [2024-07-20 19:03:18.123381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.051 [2024-07-20 19:03:18.123398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.051 [2024-07-20 19:03:18.135698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.051 [2024-07-20 19:03:18.135734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16576 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.051 [2024-07-20 19:03:18.135752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.051 [2024-07-20 19:03:18.148268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.051 [2024-07-20 19:03:18.148297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.051 [2024-07-20 19:03:18.148315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.051 [2024-07-20 19:03:18.160835] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.051 [2024-07-20 19:03:18.160864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.051 [2024-07-20 19:03:18.160881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.051 [2024-07-20 19:03:18.173634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.051 [2024-07-20 19:03:18.173663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.051 [2024-07-20 19:03:18.173681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.051 [2024-07-20 19:03:18.186771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.051 [2024-07-20 19:03:18.186809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.051 [2024-07-20 19:03:18.186827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.051 [2024-07-20 19:03:18.199360] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.051 [2024-07-20 19:03:18.199391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.051 [2024-07-20 19:03:18.199408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.051 [2024-07-20 19:03:18.211363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.051 [2024-07-20 19:03:18.211393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.051 [2024-07-20 19:03:18.211410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.051 [2024-07-20 19:03:18.224376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.051 [2024-07-20 19:03:18.224406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.051 [2024-07-20 19:03:18.224423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.051 [2024-07-20 19:03:18.237441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.051 [2024-07-20 19:03:18.237470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.051 [2024-07-20 19:03:18.237503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.051 [2024-07-20 19:03:18.249913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.051 [2024-07-20 19:03:18.249942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.051 [2024-07-20 19:03:18.249959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.051 [2024-07-20 19:03:18.262970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.051 [2024-07-20 19:03:18.263000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.051 [2024-07-20 19:03:18.263017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.051 [2024-07-20 19:03:18.276672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.051 [2024-07-20 19:03:18.276704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.051 [2024-07-20 19:03:18.276723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.051 [2024-07-20 19:03:18.289559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.051 [2024-07-20 19:03:18.289590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.051 [2024-07-20 19:03:18.289608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.051 [2024-07-20 19:03:18.302911] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.051 [2024-07-20 19:03:18.302939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.051 [2024-07-20 19:03:18.302957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.051 [2024-07-20 19:03:18.316504] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.051 
[2024-07-20 19:03:18.316534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.051 [2024-07-20 19:03:18.316552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.051 [2024-07-20 19:03:18.327982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.051 [2024-07-20 19:03:18.328011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.051 [2024-07-20 19:03:18.328028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.051 [2024-07-20 19:03:18.341507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.051 [2024-07-20 19:03:18.341538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.051 [2024-07-20 19:03:18.341556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.051 [2024-07-20 19:03:18.353863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.051 [2024-07-20 19:03:18.353892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.051 [2024-07-20 19:03:18.353930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.051 [2024-07-20 19:03:18.367460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.051 [2024-07-20 19:03:18.367491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.051 [2024-07-20 19:03:18.367509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.310 [2024-07-20 19:03:18.380299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.310 [2024-07-20 19:03:18.380342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.310 [2024-07-20 19:03:18.380358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.310 [2024-07-20 19:03:18.393978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.310 [2024-07-20 19:03:18.394007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.310 [2024-07-20 19:03:18.394025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.310 [2024-07-20 19:03:18.407020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x778360) 00:33:08.310 [2024-07-20 19:03:18.407048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.310 [2024-07-20 19:03:18.407065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.310 [2024-07-20 19:03:18.420391] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.310 [2024-07-20 19:03:18.420420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.310 [2024-07-20 19:03:18.420452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.310 [2024-07-20 19:03:18.432903] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.310 [2024-07-20 19:03:18.432934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.310 [2024-07-20 19:03:18.432951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.310 [2024-07-20 19:03:18.445247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.310 [2024-07-20 19:03:18.445278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.310 [2024-07-20 19:03:18.445310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.310 [2024-07-20 19:03:18.457035] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.310 [2024-07-20 19:03:18.457065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.310 [2024-07-20 19:03:18.457099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.310 [2024-07-20 19:03:18.471193] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.310 [2024-07-20 19:03:18.471243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.310 [2024-07-20 19:03:18.471261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.310 [2024-07-20 19:03:18.483695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.310 [2024-07-20 19:03:18.483724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.310 [2024-07-20 19:03:18.483756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.310 [2024-07-20 19:03:18.496063] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.310 [2024-07-20 19:03:18.496093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.310 [2024-07-20 19:03:18.496111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.310 [2024-07-20 19:03:18.509021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.310 [2024-07-20 19:03:18.509062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.310 [2024-07-20 19:03:18.509079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.310 [2024-07-20 19:03:18.521877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.310 [2024-07-20 19:03:18.521907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.310 [2024-07-20 19:03:18.521939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.310 [2024-07-20 19:03:18.533354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.310 [2024-07-20 19:03:18.533383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.310 [2024-07-20 19:03:18.533400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.310 [2024-07-20 19:03:18.546661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.310 [2024-07-20 19:03:18.546698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.310 [2024-07-20 19:03:18.546716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.310 [2024-07-20 19:03:18.561265] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.310 [2024-07-20 19:03:18.561297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.310 [2024-07-20 19:03:18.561315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.310 [2024-07-20 19:03:18.572724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.310 [2024-07-20 19:03:18.572756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.310 [2024-07-20 19:03:18.572776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:08.310 [2024-07-20 19:03:18.586537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.310 [2024-07-20 19:03:18.586570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.310 [2024-07-20 19:03:18.586589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.310 [2024-07-20 19:03:18.601466] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.310 [2024-07-20 19:03:18.601500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.310 [2024-07-20 19:03:18.601519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.310 [2024-07-20 19:03:18.613991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.310 [2024-07-20 19:03:18.614019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.310 [2024-07-20 19:03:18.614051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.310 [2024-07-20 19:03:18.627985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.310 [2024-07-20 19:03:18.628014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.310 [2024-07-20 19:03:18.628032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.568 [2024-07-20 19:03:18.642455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.568 [2024-07-20 19:03:18.642488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.568 [2024-07-20 19:03:18.642507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.568 [2024-07-20 19:03:18.656257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.568 [2024-07-20 19:03:18.656290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.568 [2024-07-20 19:03:18.656309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.568 [2024-07-20 19:03:18.669200] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.568 [2024-07-20 19:03:18.669233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.568 [2024-07-20 19:03:18.669252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.568 [2024-07-20 19:03:18.683538] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.568 [2024-07-20 19:03:18.683571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.568 [2024-07-20 19:03:18.683590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.568 [2024-07-20 19:03:18.698185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.568 [2024-07-20 19:03:18.698217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.568 [2024-07-20 19:03:18.698242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.568 [2024-07-20 19:03:18.710941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.568 [2024-07-20 19:03:18.710971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.569 [2024-07-20 19:03:18.710989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.569 [2024-07-20 19:03:18.724630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.569 [2024-07-20 19:03:18.724661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.569 [2024-07-20 19:03:18.724679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.569 [2024-07-20 19:03:18.738392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.569 [2024-07-20 19:03:18.738424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.569 [2024-07-20 19:03:18.738442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.569 [2024-07-20 19:03:18.751105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.569 [2024-07-20 19:03:18.751150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.569 [2024-07-20 19:03:18.751168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.569 [2024-07-20 19:03:18.763737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.569 [2024-07-20 19:03:18.763768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.569 [2024-07-20 19:03:18.763786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.569 [2024-07-20 19:03:18.777806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.569 [2024-07-20 19:03:18.777851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.569 [2024-07-20 19:03:18.777868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.569 [2024-07-20 19:03:18.789784] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.569 [2024-07-20 19:03:18.789838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.569 [2024-07-20 19:03:18.789855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.569 [2024-07-20 19:03:18.801607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.569 [2024-07-20 19:03:18.801637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.569 [2024-07-20 19:03:18.801654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.569 [2024-07-20 19:03:18.815042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.569 [2024-07-20 19:03:18.815078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.569 [2024-07-20 19:03:18.815096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.569 [2024-07-20 19:03:18.827987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.569 [2024-07-20 19:03:18.828017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.569 [2024-07-20 19:03:18.828035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.569 [2024-07-20 19:03:18.839901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.569 [2024-07-20 19:03:18.839931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.569 [2024-07-20 19:03:18.839948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.569 [2024-07-20 19:03:18.852510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.569 [2024-07-20 19:03:18.852543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.569 [2024-07-20 19:03:18.852562] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.569 [2024-07-20 19:03:18.866339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.569 [2024-07-20 19:03:18.866369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.569 [2024-07-20 19:03:18.866387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.569 [2024-07-20 19:03:18.879851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.569 [2024-07-20 19:03:18.879880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.569 [2024-07-20 19:03:18.879921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.828 [2024-07-20 19:03:18.891878] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.828 [2024-07-20 19:03:18.891908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.828 [2024-07-20 19:03:18.891925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.828 [2024-07-20 19:03:18.905156] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.828 [2024-07-20 19:03:18.905200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.828 [2024-07-20 19:03:18.905217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.828 [2024-07-20 19:03:18.917474] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.828 [2024-07-20 19:03:18.917502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.828 [2024-07-20 19:03:18.917538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.828 [2024-07-20 19:03:18.931106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.828 [2024-07-20 19:03:18.931134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.828 [2024-07-20 19:03:18.931150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.828 [2024-07-20 19:03:18.944507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.828 [2024-07-20 19:03:18.944550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:08.828 [2024-07-20 19:03:18.944567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.828 [2024-07-20 19:03:18.956718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.828 [2024-07-20 19:03:18.956748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.828 [2024-07-20 19:03:18.956765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.828 [2024-07-20 19:03:18.970130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.828 [2024-07-20 19:03:18.970173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.828 [2024-07-20 19:03:18.970190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.828 [2024-07-20 19:03:18.982849] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.828 [2024-07-20 19:03:18.982878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.828 [2024-07-20 19:03:18.982895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.828 [2024-07-20 19:03:18.995261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.828 [2024-07-20 19:03:18.995290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.828 [2024-07-20 19:03:18.995307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.828 [2024-07-20 19:03:19.008703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.828 [2024-07-20 19:03:19.008732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.828 [2024-07-20 19:03:19.008766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.828 [2024-07-20 19:03:19.021021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.828 [2024-07-20 19:03:19.021051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.828 [2024-07-20 19:03:19.021069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.828 [2024-07-20 19:03:19.034447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.828 [2024-07-20 19:03:19.034483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 
lba:17354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.828 [2024-07-20 19:03:19.034516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.828 [2024-07-20 19:03:19.046695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.828 [2024-07-20 19:03:19.046724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.828 [2024-07-20 19:03:19.046757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.828 [2024-07-20 19:03:19.058761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.828 [2024-07-20 19:03:19.058812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.828 [2024-07-20 19:03:19.058830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.828 [2024-07-20 19:03:19.072285] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x778360) 00:33:08.828 [2024-07-20 19:03:19.072312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.828 [2024-07-20 19:03:19.072344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.828 00:33:08.828 Latency(us) 00:33:08.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:08.828 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:08.828 nvme0n1 : 2.00 19788.55 77.30 0.00 0.00 6458.82 2924.85 16990.81 00:33:08.828 =================================================================================================================== 00:33:08.828 Total : 19788.55 77.30 0.00 0.00 6458.82 2924.85 16990.81 00:33:08.828 0 00:33:08.828 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:08.828 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:08.828 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:08.828 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:08.828 | .driver_specific 00:33:08.828 | .nvme_error 00:33:08.828 | .status_code 00:33:08.828 | .command_transient_transport_error' 00:33:09.122 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 155 > 0 )) 00:33:09.122 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1535524 00:33:09.122 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1535524 ']' 00:33:09.122 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1535524 00:33:09.122 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:09.122 19:03:19 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:09.122 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1535524 00:33:09.122 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:09.122 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:09.122 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1535524' 00:33:09.122 killing process with pid 1535524 00:33:09.122 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1535524 00:33:09.122 Received shutdown signal, test time was about 2.000000 seconds 00:33:09.122 00:33:09.122 Latency(us) 00:33:09.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:09.122 =================================================================================================================== 00:33:09.122 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:09.122 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1535524 00:33:09.381 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:33:09.381 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:09.381 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:09.381 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:09.381 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:09.381 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1535954 00:33:09.381 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:33:09.381 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1535954 /var/tmp/bperf.sock 00:33:09.381 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1535954 ']' 00:33:09.381 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:09.381 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:09.381 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:09.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:09.381 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:09.381 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:09.381 [2024-07-20 19:03:19.647827] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
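Note on the check traced above: get_transient_errcount reads the NVMe error counters that bdevperf accumulates per bdev (they require bdev_nvme_set_options --nvme-error-stat, as traced below for the next run) and filters out the transient transport error count with jq; the test then only asserts that the count is non-zero (155 in this run). A minimal stand-alone sketch of that query, assuming the same workspace path and bperf RPC socket shown in this log:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock
# Dump per-bdev I/O statistics; with --nvme-error-stat enabled, driver_specific
# carries the NVMe status-code counters recorded for the attached controller.
errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The digest error case passes as long as at least one such completion was seen.
(( errcount > 0 )) && echo "saw $errcount COMMAND TRANSIENT TRANSPORT ERROR completions"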
00:33:09.381 [2024-07-20 19:03:19.647920] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535954 ] 00:33:09.381 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:09.381 Zero copy mechanism will not be used. 00:33:09.381 EAL: No free 2048 kB hugepages reported on node 1 00:33:09.639 [2024-07-20 19:03:19.713867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.639 [2024-07-20 19:03:19.805205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:09.639 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:09.639 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:09.639 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:09.639 19:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:09.897 19:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:09.897 19:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.897 19:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:10.155 19:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.155 19:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:10.155 19:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:10.412 nvme0n1 00:33:10.412 19:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:10.412 19:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.412 19:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:10.412 19:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.412 19:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:10.412 19:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:10.671 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:10.671 Zero copy mechanism will not be used. 00:33:10.671 Running I/O for 2 seconds... 
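The setup just traced is the core of this data-digest error case: bdevperf is launched idle (-z) with a 128 KiB random-read workload at queue depth 16 for 2 seconds, NVMe error counters and unlimited bdev retries are enabled, crc32c error injection is armed in corrupt mode for 32 operations, and the controller is attached with --ddgst so reads carry a data digest that will fail verification. A condensed sketch of that sequence, using the paths and arguments shown in the trace; the socket addressed by the plain rpc_cmd calls is not visible in this log, so it is left as a labeled assumption:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock
ACCEL_SOCK=/var/tmp/spdk.sock   # assumption: the rpc_cmd socket is not shown in the trace

# Launch bdevperf idle: -m 2 core mask, -w randread workload, -o 131072-byte I/Os,
# -t 2 second runtime, -q 16 queue depth, -z wait for perform_tests over RPC.
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &

# Enable per-controller NVMe error counters and unlimited bdev retries.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Start with crc32c injection disabled, then attach with data digest enabled over TCP.
"$SPDK/scripts/rpc.py" -s "$ACCEL_SOCK" accel_error_inject_error -o crc32c -t disable
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt the next 32 crc32c computations so received data fails the digest check.
"$SPDK/scripts/rpc.py" -s "$ACCEL_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 32
# Kick off the queued bdevperf run.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests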
00:33:10.671 [2024-07-20 19:03:20.853038] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:10.671 [2024-07-20 19:03:20.853110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.671 [2024-07-20 19:03:20.853146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.671 [2024-07-20 19:03:20.873907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:10.671 [2024-07-20 19:03:20.873946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.671 [2024-07-20 19:03:20.873964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.671 [2024-07-20 19:03:20.894237] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:10.671 [2024-07-20 19:03:20.894272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.672 [2024-07-20 19:03:20.894292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.672 [2024-07-20 19:03:20.914859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:10.672 [2024-07-20 19:03:20.914903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.672 [2024-07-20 19:03:20.914922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.672 [2024-07-20 19:03:20.935688] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:10.672 [2024-07-20 19:03:20.935721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.672 [2024-07-20 19:03:20.935740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.672 [2024-07-20 19:03:20.956297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:10.672 [2024-07-20 19:03:20.956331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.672 [2024-07-20 19:03:20.956350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.672 [2024-07-20 19:03:20.977506] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:10.672 [2024-07-20 19:03:20.977540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.672 [2024-07-20 19:03:20.977559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.930 [2024-07-20 19:03:20.998132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:10.930 [2024-07-20 19:03:20.998177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.930 [2024-07-20 19:03:20.998198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.930 [2024-07-20 19:03:21.017478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:10.930 [2024-07-20 19:03:21.017513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.930 [2024-07-20 19:03:21.017533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.930 [2024-07-20 19:03:21.036413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:10.930 [2024-07-20 19:03:21.036445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.930 [2024-07-20 19:03:21.036464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.930 [2024-07-20 19:03:21.055247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:10.930 [2024-07-20 19:03:21.055278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.930 [2024-07-20 19:03:21.055297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.930 [2024-07-20 19:03:21.074123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:10.930 [2024-07-20 19:03:21.074170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.930 [2024-07-20 19:03:21.074189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.930 [2024-07-20 19:03:21.092947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:10.930 [2024-07-20 19:03:21.092989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.930 [2024-07-20 19:03:21.093006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.930 [2024-07-20 19:03:21.112189] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:10.930 [2024-07-20 19:03:21.112222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.930 [2024-07-20 19:03:21.112241] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.930 [2024-07-20 19:03:21.130965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:10.930 [2024-07-20 19:03:21.130993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.930 [2024-07-20 19:03:21.131033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.930 [2024-07-20 19:03:21.149952] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:10.930 [2024-07-20 19:03:21.149994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.930 [2024-07-20 19:03:21.150011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.930 [2024-07-20 19:03:21.169064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:10.930 [2024-07-20 19:03:21.169093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.930 [2024-07-20 19:03:21.169125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.930 [2024-07-20 19:03:21.187826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:10.930 [2024-07-20 19:03:21.187869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.930 [2024-07-20 19:03:21.187885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.930 [2024-07-20 19:03:21.206569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:10.930 [2024-07-20 19:03:21.206602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.930 [2024-07-20 19:03:21.206621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.930 [2024-07-20 19:03:21.225638] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:10.930 [2024-07-20 19:03:21.225684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.930 [2024-07-20 19:03:21.225703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.930 [2024-07-20 19:03:21.244307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:10.930 [2024-07-20 19:03:21.244340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:10.930 [2024-07-20 19:03:21.244359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.189 [2024-07-20 19:03:21.263696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.189 [2024-07-20 19:03:21.263742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.189 [2024-07-20 19:03:21.263760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.189 [2024-07-20 19:03:21.282665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.189 [2024-07-20 19:03:21.282693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.189 [2024-07-20 19:03:21.282710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.189 [2024-07-20 19:03:21.301972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.189 [2024-07-20 19:03:21.302001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.189 [2024-07-20 19:03:21.302033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.189 [2024-07-20 19:03:21.321374] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.189 [2024-07-20 19:03:21.321418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.189 [2024-07-20 19:03:21.321438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.189 [2024-07-20 19:03:21.340227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.189 [2024-07-20 19:03:21.340274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.189 [2024-07-20 19:03:21.340293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.189 [2024-07-20 19:03:21.359183] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.189 [2024-07-20 19:03:21.359230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.189 [2024-07-20 19:03:21.359250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.189 [2024-07-20 19:03:21.378253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.189 [2024-07-20 19:03:21.378287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.189 [2024-07-20 19:03:21.378306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.189 [2024-07-20 19:03:21.397398] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.189 [2024-07-20 19:03:21.397444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.189 [2024-07-20 19:03:21.397463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.189 [2024-07-20 19:03:21.416238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.189 [2024-07-20 19:03:21.416284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.189 [2024-07-20 19:03:21.416303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.189 [2024-07-20 19:03:21.435255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.189 [2024-07-20 19:03:21.435301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.189 [2024-07-20 19:03:21.435320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.189 [2024-07-20 19:03:21.454483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.189 [2024-07-20 19:03:21.454528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.189 [2024-07-20 19:03:21.454553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.189 [2024-07-20 19:03:21.473786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.189 [2024-07-20 19:03:21.473828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.189 [2024-07-20 19:03:21.473861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.189 [2024-07-20 19:03:21.493188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.189 [2024-07-20 19:03:21.493232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.189 [2024-07-20 19:03:21.493252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.447 [2024-07-20 19:03:21.512670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.447 [2024-07-20 19:03:21.512700] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.447 [2024-07-20 19:03:21.512717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.447 [2024-07-20 19:03:21.532257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.447 [2024-07-20 19:03:21.532300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.447 [2024-07-20 19:03:21.532319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.447 [2024-07-20 19:03:21.551456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.447 [2024-07-20 19:03:21.551488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.447 [2024-07-20 19:03:21.551508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.447 [2024-07-20 19:03:21.570958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.447 [2024-07-20 19:03:21.570985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.447 [2024-07-20 19:03:21.571017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.447 [2024-07-20 19:03:21.590274] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.447 [2024-07-20 19:03:21.590320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.447 [2024-07-20 19:03:21.590339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.447 [2024-07-20 19:03:21.609391] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.447 [2024-07-20 19:03:21.609436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.447 [2024-07-20 19:03:21.609455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.447 [2024-07-20 19:03:21.628600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.447 [2024-07-20 19:03:21.628639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.447 [2024-07-20 19:03:21.628659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.447 [2024-07-20 19:03:21.647956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1007d50) 00:33:11.447 [2024-07-20 19:03:21.647983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.447 [2024-07-20 19:03:21.648015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.447 [2024-07-20 19:03:21.667092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.447 [2024-07-20 19:03:21.667119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.447 [2024-07-20 19:03:21.667135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.447 [2024-07-20 19:03:21.686332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.447 [2024-07-20 19:03:21.686375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.448 [2024-07-20 19:03:21.686395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.448 [2024-07-20 19:03:21.705407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.448 [2024-07-20 19:03:21.705452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.448 [2024-07-20 19:03:21.705472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.448 [2024-07-20 19:03:21.724616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.448 [2024-07-20 19:03:21.724648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.448 [2024-07-20 19:03:21.724668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.448 [2024-07-20 19:03:21.744004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.448 [2024-07-20 19:03:21.744032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.448 [2024-07-20 19:03:21.744064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.448 [2024-07-20 19:03:21.763459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.448 [2024-07-20 19:03:21.763491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.448 [2024-07-20 19:03:21.763510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.706 [2024-07-20 19:03:21.783710] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.706 [2024-07-20 19:03:21.783741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.706 [2024-07-20 19:03:21.783759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.706 [2024-07-20 19:03:21.802907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.706 [2024-07-20 19:03:21.802935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.706 [2024-07-20 19:03:21.802967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.706 [2024-07-20 19:03:21.822348] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.706 [2024-07-20 19:03:21.822380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.707 [2024-07-20 19:03:21.822398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.707 [2024-07-20 19:03:21.841670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.707 [2024-07-20 19:03:21.841717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.707 [2024-07-20 19:03:21.841736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.707 [2024-07-20 19:03:21.860896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.707 [2024-07-20 19:03:21.860940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.707 [2024-07-20 19:03:21.860956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.707 [2024-07-20 19:03:21.879981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.707 [2024-07-20 19:03:21.880010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.707 [2024-07-20 19:03:21.880042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.707 [2024-07-20 19:03:21.899020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.707 [2024-07-20 19:03:21.899050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.707 [2024-07-20 19:03:21.899068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:33:11.707 [2024-07-20 19:03:21.918261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.707 [2024-07-20 19:03:21.918307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.707 [2024-07-20 19:03:21.918326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.707 [2024-07-20 19:03:21.937781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.707 [2024-07-20 19:03:21.937823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.707 [2024-07-20 19:03:21.937856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.707 [2024-07-20 19:03:21.956998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.707 [2024-07-20 19:03:21.957033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.707 [2024-07-20 19:03:21.957066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.707 [2024-07-20 19:03:21.976385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.707 [2024-07-20 19:03:21.976419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.707 [2024-07-20 19:03:21.976438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.707 [2024-07-20 19:03:21.995751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.707 [2024-07-20 19:03:21.995813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.707 [2024-07-20 19:03:21.995848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.707 [2024-07-20 19:03:22.014972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.707 [2024-07-20 19:03:22.015001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.707 [2024-07-20 19:03:22.015034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.965 [2024-07-20 19:03:22.034901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.965 [2024-07-20 19:03:22.034944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.965 [2024-07-20 19:03:22.034960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.965 [2024-07-20 19:03:22.054157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.965 [2024-07-20 19:03:22.054190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.965 [2024-07-20 19:03:22.054210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.966 [2024-07-20 19:03:22.073329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.966 [2024-07-20 19:03:22.073373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.966 [2024-07-20 19:03:22.073392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.966 [2024-07-20 19:03:22.092586] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.966 [2024-07-20 19:03:22.092627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.966 [2024-07-20 19:03:22.092647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.966 [2024-07-20 19:03:22.112002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.966 [2024-07-20 19:03:22.112031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.966 [2024-07-20 19:03:22.112063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.966 [2024-07-20 19:03:22.131303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.966 [2024-07-20 19:03:22.131346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.966 [2024-07-20 19:03:22.131365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.966 [2024-07-20 19:03:22.150637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.966 [2024-07-20 19:03:22.150669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.966 [2024-07-20 19:03:22.150688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.966 [2024-07-20 19:03:22.169912] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.966 [2024-07-20 19:03:22.169941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.966 [2024-07-20 19:03:22.169973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.966 [2024-07-20 19:03:22.189030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.966 [2024-07-20 19:03:22.189058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.966 [2024-07-20 19:03:22.189088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.966 [2024-07-20 19:03:22.207922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.966 [2024-07-20 19:03:22.207964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.966 [2024-07-20 19:03:22.207980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.966 [2024-07-20 19:03:22.226948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.966 [2024-07-20 19:03:22.226976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.966 [2024-07-20 19:03:22.227008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.966 [2024-07-20 19:03:22.245949] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.966 [2024-07-20 19:03:22.245976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.966 [2024-07-20 19:03:22.246007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.966 [2024-07-20 19:03:22.264831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.966 [2024-07-20 19:03:22.264859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.966 [2024-07-20 19:03:22.264892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.966 [2024-07-20 19:03:22.283677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:11.966 [2024-07-20 19:03:22.283724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.966 [2024-07-20 19:03:22.283749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:12.224 [2024-07-20 19:03:22.303258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.224 [2024-07-20 19:03:22.303291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:12.224 [2024-07-20 19:03:22.303309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:12.224 [2024-07-20 19:03:22.323761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.224 [2024-07-20 19:03:22.323843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.224 [2024-07-20 19:03:22.323861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:12.224 [2024-07-20 19:03:22.343188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.224 [2024-07-20 19:03:22.343220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.224 [2024-07-20 19:03:22.343239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:12.224 [2024-07-20 19:03:22.362213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.224 [2024-07-20 19:03:22.362245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.224 [2024-07-20 19:03:22.362264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:12.224 [2024-07-20 19:03:22.381276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.224 [2024-07-20 19:03:22.381307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.224 [2024-07-20 19:03:22.381326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:12.224 [2024-07-20 19:03:22.400203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.224 [2024-07-20 19:03:22.400238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.224 [2024-07-20 19:03:22.400257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:12.224 [2024-07-20 19:03:22.419369] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.224 [2024-07-20 19:03:22.419395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.224 [2024-07-20 19:03:22.419410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:12.224 [2024-07-20 19:03:22.438492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.224 [2024-07-20 19:03:22.438538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.224 [2024-07-20 19:03:22.438557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:12.224 [2024-07-20 19:03:22.457521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.224 [2024-07-20 19:03:22.457573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.224 [2024-07-20 19:03:22.457594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:12.224 [2024-07-20 19:03:22.476677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.224 [2024-07-20 19:03:22.476723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.224 [2024-07-20 19:03:22.476742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:12.224 [2024-07-20 19:03:22.495736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.224 [2024-07-20 19:03:22.495783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.224 [2024-07-20 19:03:22.495813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:12.224 [2024-07-20 19:03:22.514873] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.224 [2024-07-20 19:03:22.514901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.224 [2024-07-20 19:03:22.514932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:12.224 [2024-07-20 19:03:22.534273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.224 [2024-07-20 19:03:22.534318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.224 [2024-07-20 19:03:22.534337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:12.482 [2024-07-20 19:03:22.553970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.482 [2024-07-20 19:03:22.554015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.482 [2024-07-20 19:03:22.554032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:12.482 [2024-07-20 19:03:22.573227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.482 [2024-07-20 19:03:22.573270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.482 [2024-07-20 19:03:22.573290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:12.482 [2024-07-20 19:03:22.592302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.482 [2024-07-20 19:03:22.592334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.482 [2024-07-20 19:03:22.592353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:12.482 [2024-07-20 19:03:22.611366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.482 [2024-07-20 19:03:22.611398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.482 [2024-07-20 19:03:22.611417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:12.482 [2024-07-20 19:03:22.630379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.482 [2024-07-20 19:03:22.630425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.482 [2024-07-20 19:03:22.630444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:12.482 [2024-07-20 19:03:22.649405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.482 [2024-07-20 19:03:22.649451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.482 [2024-07-20 19:03:22.649470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:12.482 [2024-07-20 19:03:22.668262] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.482 [2024-07-20 19:03:22.668295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.482 [2024-07-20 19:03:22.668313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:12.482 [2024-07-20 19:03:22.687429] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.482 [2024-07-20 19:03:22.687475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.482 [2024-07-20 19:03:22.687494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:12.482 [2024-07-20 19:03:22.706352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 
00:33:12.482 [2024-07-20 19:03:22.706398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.482 [2024-07-20 19:03:22.706416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:12.482 [2024-07-20 19:03:22.725366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.482 [2024-07-20 19:03:22.725399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.482 [2024-07-20 19:03:22.725418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:12.482 [2024-07-20 19:03:22.744564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.482 [2024-07-20 19:03:22.744610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.482 [2024-07-20 19:03:22.744629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:12.482 [2024-07-20 19:03:22.763924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.482 [2024-07-20 19:03:22.763951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.482 [2024-07-20 19:03:22.763983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:12.482 [2024-07-20 19:03:22.782748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.482 [2024-07-20 19:03:22.782807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.482 [2024-07-20 19:03:22.782828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:12.482 [2024-07-20 19:03:22.801771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.482 [2024-07-20 19:03:22.801809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.482 [2024-07-20 19:03:22.801828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:12.740 [2024-07-20 19:03:22.821242] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1007d50) 00:33:12.740 [2024-07-20 19:03:22.821275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.740 [2024-07-20 19:03:22.821294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:12.740 00:33:12.740 Latency(us) 00:33:12.740 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:33:12.740 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:12.740 nvme0n1 : 2.00 1603.67 200.46 0.00 0.00 9971.97 9126.49 21165.70 00:33:12.740 =================================================================================================================== 00:33:12.740 Total : 1603.67 200.46 0.00 0.00 9971.97 9126.49 21165.70 00:33:12.740 0 00:33:12.740 19:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:12.740 19:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:12.740 19:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:12.740 | .driver_specific 00:33:12.740 | .nvme_error 00:33:12.740 | .status_code 00:33:12.740 | .command_transient_transport_error' 00:33:12.740 19:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:12.997 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 103 > 0 )) 00:33:12.997 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1535954 00:33:12.997 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1535954 ']' 00:33:12.997 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1535954 00:33:12.997 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:12.997 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:12.997 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1535954 00:33:12.997 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:12.997 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:12.997 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1535954' 00:33:12.997 killing process with pid 1535954 00:33:12.997 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1535954 00:33:12.997 Received shutdown signal, test time was about 2.000000 seconds 00:33:12.997 00:33:12.997 Latency(us) 00:33:12.997 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.997 =================================================================================================================== 00:33:12.997 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:12.997 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1535954 00:33:13.255 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:33:13.255 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:13.255 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:13.255 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:13.255 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:13.255 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1536463 
00:33:13.255 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:13.255 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1536463 /var/tmp/bperf.sock 00:33:13.255 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1536463 ']' 00:33:13.255 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:13.255 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:13.255 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:13.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:13.255 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:13.255 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:13.255 [2024-07-20 19:03:23.404406] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:13.255 [2024-07-20 19:03:23.404484] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1536463 ] 00:33:13.255 EAL: No free 2048 kB hugepages reported on node 1 00:33:13.255 [2024-07-20 19:03:23.466948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.255 [2024-07-20 19:03:23.557317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:13.513 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:13.513 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:13.513 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:13.513 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:13.770 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:13.770 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.770 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:13.770 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.770 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:13.770 19:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:14.334 nvme0n1 00:33:14.334 19:03:24 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:14.334 19:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.334 19:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:14.334 19:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.334 19:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:14.334 19:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:14.334 Running I/O for 2 seconds... 00:33:14.334 [2024-07-20 19:03:24.512697] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:14.334 [2024-07-20 19:03:24.513997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.334 [2024-07-20 19:03:24.514040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:14.334 [2024-07-20 19:03:24.525123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:14.334 [2024-07-20 19:03:24.526393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.334 [2024-07-20 19:03:24.526423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.334 [2024-07-20 19:03:24.537320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:14.334 [2024-07-20 19:03:24.538542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.334 [2024-07-20 19:03:24.538571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.335 [2024-07-20 19:03:24.549191] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:14.335 [2024-07-20 19:03:24.550441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.335 [2024-07-20 19:03:24.550469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.335 [2024-07-20 19:03:24.561124] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:14.335 [2024-07-20 19:03:24.562393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.335 [2024-07-20 19:03:24.562422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.335 [2024-07-20 19:03:24.572955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:14.335 [2024-07-20 
19:03:24.574136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.335 [2024-07-20 19:03:24.574166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.335 [2024-07-20 19:03:24.584914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:14.335 [2024-07-20 19:03:24.586080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.335 [2024-07-20 19:03:24.586109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.335 [2024-07-20 19:03:24.596639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:14.335 [2024-07-20 19:03:24.597859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.335 [2024-07-20 19:03:24.597891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.335 [2024-07-20 19:03:24.608503] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:14.335 [2024-07-20 19:03:24.609703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.335 [2024-07-20 19:03:24.609732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.335 [2024-07-20 19:03:24.620472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:14.335 [2024-07-20 19:03:24.621692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.335 [2024-07-20 19:03:24.621721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.335 [2024-07-20 19:03:24.632419] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:14.335 [2024-07-20 19:03:24.633704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.335 [2024-07-20 19:03:24.633732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.335 [2024-07-20 19:03:24.644286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:14.335 [2024-07-20 19:03:24.645610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.335 [2024-07-20 19:03:24.645639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.335 [2024-07-20 19:03:24.656221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 
00:33:14.335 [2024-07-20 19:03:24.657587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.335 [2024-07-20 19:03:24.657615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.593 [2024-07-20 19:03:24.668237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:14.593 [2024-07-20 19:03:24.669501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.593 [2024-07-20 19:03:24.669529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.593 [2024-07-20 19:03:24.680091] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:14.593 [2024-07-20 19:03:24.681333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.593 [2024-07-20 19:03:24.681360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.593 [2024-07-20 19:03:24.691869] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:14.593 [2024-07-20 19:03:24.693082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.593 [2024-07-20 19:03:24.693110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.593 [2024-07-20 19:03:24.703659] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:14.593 [2024-07-20 19:03:24.705014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.593 [2024-07-20 19:03:24.705041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.593 [2024-07-20 19:03:24.715512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:14.593 [2024-07-20 19:03:24.716729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.593 [2024-07-20 19:03:24.716757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.593 [2024-07-20 19:03:24.727249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:14.593 [2024-07-20 19:03:24.728552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.593 [2024-07-20 19:03:24.728580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.593 [2024-07-20 19:03:24.739144] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with 
pdu=0x2000190ed0b0 00:33:14.593 [2024-07-20 19:03:24.740397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.593 [2024-07-20 19:03:24.740424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.593 [2024-07-20 19:03:24.750934] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:14.593 [2024-07-20 19:03:24.752330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.593 [2024-07-20 19:03:24.752361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.593 [2024-07-20 19:03:24.763617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:14.593 [2024-07-20 19:03:24.764990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.593 [2024-07-20 19:03:24.765018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.593 [2024-07-20 19:03:24.776132] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:14.593 [2024-07-20 19:03:24.777475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.593 [2024-07-20 19:03:24.777506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.593 [2024-07-20 19:03:24.788844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:14.593 [2024-07-20 19:03:24.790134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.593 [2024-07-20 19:03:24.790165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.593 [2024-07-20 19:03:24.801466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:14.593 [2024-07-20 19:03:24.802768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.593 [2024-07-20 19:03:24.802811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.593 [2024-07-20 19:03:24.814148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:14.593 [2024-07-20 19:03:24.815473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.593 [2024-07-20 19:03:24.815505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.593 [2024-07-20 19:03:24.826791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:14.593 [2024-07-20 19:03:24.828201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.593 [2024-07-20 19:03:24.828231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.593 [2024-07-20 19:03:24.839555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:14.593 [2024-07-20 19:03:24.840881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.593 [2024-07-20 19:03:24.840910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.594 [2024-07-20 19:03:24.852255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:14.594 [2024-07-20 19:03:24.853595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.594 [2024-07-20 19:03:24.853625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.594 [2024-07-20 19:03:24.864989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:14.594 [2024-07-20 19:03:24.866314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.594 [2024-07-20 19:03:24.866345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.594 [2024-07-20 19:03:24.877654] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:14.594 [2024-07-20 19:03:24.878978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.594 [2024-07-20 19:03:24.879006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.594 [2024-07-20 19:03:24.890346] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:14.594 [2024-07-20 19:03:24.891664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.594 [2024-07-20 19:03:24.891695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.594 [2024-07-20 19:03:24.902997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:14.594 [2024-07-20 19:03:24.904401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.594 [2024-07-20 19:03:24.904432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.594 [2024-07-20 19:03:24.915802] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:14.852 [2024-07-20 19:03:24.917196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.853 [2024-07-20 19:03:24.917227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.853 [2024-07-20 19:03:24.928670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:14.853 [2024-07-20 19:03:24.930002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.853 [2024-07-20 19:03:24.930030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.853 [2024-07-20 19:03:24.941375] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:14.853 [2024-07-20 19:03:24.942693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.853 [2024-07-20 19:03:24.942724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.853 [2024-07-20 19:03:24.954041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:14.853 [2024-07-20 19:03:24.955341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.853 [2024-07-20 19:03:24.955371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.853 [2024-07-20 19:03:24.966525] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:14.853 [2024-07-20 19:03:24.967851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.853 [2024-07-20 19:03:24.967880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.853 [2024-07-20 19:03:24.979240] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:14.853 [2024-07-20 19:03:24.980541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.853 [2024-07-20 19:03:24.980573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.853 [2024-07-20 19:03:24.991804] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:14.853 [2024-07-20 19:03:24.993255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.853 [2024-07-20 19:03:24.993286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.853 [2024-07-20 19:03:25.004550] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:14.853 [2024-07-20 19:03:25.005894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.853 [2024-07-20 19:03:25.005922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.853 [2024-07-20 19:03:25.017232] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:14.853 [2024-07-20 19:03:25.018539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.853 [2024-07-20 19:03:25.018570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.853 [2024-07-20 19:03:25.029615] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:14.853 [2024-07-20 19:03:25.030959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.853 [2024-07-20 19:03:25.030987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.853 [2024-07-20 19:03:25.042270] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:14.853 [2024-07-20 19:03:25.043557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.853 [2024-07-20 19:03:25.043588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.853 [2024-07-20 19:03:25.054922] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:14.853 [2024-07-20 19:03:25.056258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.853 [2024-07-20 19:03:25.056289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.853 [2024-07-20 19:03:25.067533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:14.853 [2024-07-20 19:03:25.068847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.853 [2024-07-20 19:03:25.068874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.853 [2024-07-20 19:03:25.080278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:14.853 [2024-07-20 19:03:25.081588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.853 [2024-07-20 19:03:25.081619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.853 
[2024-07-20 19:03:25.092955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:14.853 [2024-07-20 19:03:25.094267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.853 [2024-07-20 19:03:25.094298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.853 [2024-07-20 19:03:25.105599] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:14.853 [2024-07-20 19:03:25.106952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.853 [2024-07-20 19:03:25.106980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.853 [2024-07-20 19:03:25.118336] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:14.853 [2024-07-20 19:03:25.119641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.853 [2024-07-20 19:03:25.119672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.853 [2024-07-20 19:03:25.130947] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:14.853 [2024-07-20 19:03:25.132344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.853 [2024-07-20 19:03:25.132380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.853 [2024-07-20 19:03:25.143667] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:14.853 [2024-07-20 19:03:25.145000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.853 [2024-07-20 19:03:25.145028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.853 [2024-07-20 19:03:25.156334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:14.853 [2024-07-20 19:03:25.157647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.853 [2024-07-20 19:03:25.157678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.853 [2024-07-20 19:03:25.169008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:14.853 [2024-07-20 19:03:25.170321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.853 [2024-07-20 19:03:25.170352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 
dnr:0 00:33:15.112 [2024-07-20 19:03:25.181957] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:15.112 [2024-07-20 19:03:25.183343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.112 [2024-07-20 19:03:25.183373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.112 [2024-07-20 19:03:25.194575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:15.112 [2024-07-20 19:03:25.195902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.112 [2024-07-20 19:03:25.195929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.112 [2024-07-20 19:03:25.207291] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:15.112 [2024-07-20 19:03:25.208593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.112 [2024-07-20 19:03:25.208624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.112 [2024-07-20 19:03:25.219977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:15.112 [2024-07-20 19:03:25.221296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.112 [2024-07-20 19:03:25.221327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.112 [2024-07-20 19:03:25.232612] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:15.112 [2024-07-20 19:03:25.233949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.112 [2024-07-20 19:03:25.233977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.112 [2024-07-20 19:03:25.245318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:15.113 [2024-07-20 19:03:25.246641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.113 [2024-07-20 19:03:25.246678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.113 [2024-07-20 19:03:25.258018] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:15.113 [2024-07-20 19:03:25.259356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.113 [2024-07-20 19:03:25.259387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 
sqhd:005e p:0 m:0 dnr:0 00:33:15.113 [2024-07-20 19:03:25.270672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:15.113 [2024-07-20 19:03:25.272006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.113 [2024-07-20 19:03:25.272034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.113 [2024-07-20 19:03:25.283224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:15.113 [2024-07-20 19:03:25.284525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.113 [2024-07-20 19:03:25.284557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.113 [2024-07-20 19:03:25.295850] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:15.113 [2024-07-20 19:03:25.297241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.113 [2024-07-20 19:03:25.297273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.113 [2024-07-20 19:03:25.308555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:15.113 [2024-07-20 19:03:25.309887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.113 [2024-07-20 19:03:25.309916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.113 [2024-07-20 19:03:25.321316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:15.113 [2024-07-20 19:03:25.322632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.113 [2024-07-20 19:03:25.322663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.113 [2024-07-20 19:03:25.333958] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:15.113 [2024-07-20 19:03:25.335290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.113 [2024-07-20 19:03:25.335321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.113 [2024-07-20 19:03:25.346292] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:15.113 [2024-07-20 19:03:25.347639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.113 [2024-07-20 19:03:25.347666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.113 [2024-07-20 19:03:25.358698] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:15.113 [2024-07-20 19:03:25.360004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.113 [2024-07-20 19:03:25.360032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.113 [2024-07-20 19:03:25.371052] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:15.113 [2024-07-20 19:03:25.372348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.113 [2024-07-20 19:03:25.372375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.113 [2024-07-20 19:03:25.383325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:15.113 [2024-07-20 19:03:25.384619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.113 [2024-07-20 19:03:25.384647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.113 [2024-07-20 19:03:25.395650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:15.113 [2024-07-20 19:03:25.396956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.113 [2024-07-20 19:03:25.396984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.113 [2024-07-20 19:03:25.407918] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:15.113 [2024-07-20 19:03:25.409234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.113 [2024-07-20 19:03:25.409261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.113 [2024-07-20 19:03:25.420250] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:15.113 [2024-07-20 19:03:25.421533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.113 [2024-07-20 19:03:25.421561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.113 [2024-07-20 19:03:25.432617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:15.113 [2024-07-20 19:03:25.434074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.113 [2024-07-20 19:03:25.434102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.372 [2024-07-20 19:03:25.445364] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:15.372 [2024-07-20 19:03:25.446666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.372 [2024-07-20 19:03:25.446693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.372 [2024-07-20 19:03:25.457500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:15.372 [2024-07-20 19:03:25.458843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.372 [2024-07-20 19:03:25.458871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.372 [2024-07-20 19:03:25.469959] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:15.372 [2024-07-20 19:03:25.471258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.372 [2024-07-20 19:03:25.471286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.372 [2024-07-20 19:03:25.482035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:15.372 [2024-07-20 19:03:25.483358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.372 [2024-07-20 19:03:25.483385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.372 [2024-07-20 19:03:25.494113] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:15.372 [2024-07-20 19:03:25.495425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.373 [2024-07-20 19:03:25.495451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.373 [2024-07-20 19:03:25.506392] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:15.373 [2024-07-20 19:03:25.507690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.373 [2024-07-20 19:03:25.507733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.373 [2024-07-20 19:03:25.518775] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:15.373 [2024-07-20 19:03:25.520098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.373 [2024-07-20 19:03:25.520144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.373 [2024-07-20 19:03:25.531412] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:15.373 [2024-07-20 19:03:25.532749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.373 [2024-07-20 19:03:25.532779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.373 [2024-07-20 19:03:25.544063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:15.373 [2024-07-20 19:03:25.545419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.373 [2024-07-20 19:03:25.545448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.373 [2024-07-20 19:03:25.556630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:15.373 [2024-07-20 19:03:25.557969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.373 [2024-07-20 19:03:25.557997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.373 [2024-07-20 19:03:25.569237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:15.373 [2024-07-20 19:03:25.570543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.373 [2024-07-20 19:03:25.570578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.373 [2024-07-20 19:03:25.581867] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:15.373 [2024-07-20 19:03:25.583244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.373 [2024-07-20 19:03:25.583274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.373 [2024-07-20 19:03:25.594357] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:15.373 [2024-07-20 19:03:25.595668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.373 [2024-07-20 19:03:25.595698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.373 [2024-07-20 19:03:25.606906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:15.373 [2024-07-20 19:03:25.608348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.373 [2024-07-20 
19:03:25.608377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.373 [2024-07-20 19:03:25.619428] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:15.373 [2024-07-20 19:03:25.620674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.373 [2024-07-20 19:03:25.620703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.373 [2024-07-20 19:03:25.631739] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:15.373 [2024-07-20 19:03:25.633006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.373 [2024-07-20 19:03:25.633034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.373 [2024-07-20 19:03:25.643542] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:15.373 [2024-07-20 19:03:25.644806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.373 [2024-07-20 19:03:25.644834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.373 [2024-07-20 19:03:25.655246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:15.373 [2024-07-20 19:03:25.656545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.373 [2024-07-20 19:03:25.656577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.373 [2024-07-20 19:03:25.667910] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:15.373 [2024-07-20 19:03:25.669370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.373 [2024-07-20 19:03:25.669401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.373 [2024-07-20 19:03:25.680521] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:15.373 [2024-07-20 19:03:25.681847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.373 [2024-07-20 19:03:25.681875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.373 [2024-07-20 19:03:25.693387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:15.373 [2024-07-20 19:03:25.694758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:15.373 [2024-07-20 19:03:25.694790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.633 [2024-07-20 19:03:25.706281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:15.633 [2024-07-20 19:03:25.707584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.633 [2024-07-20 19:03:25.707615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.633 [2024-07-20 19:03:25.718955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:15.633 [2024-07-20 19:03:25.720281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.633 [2024-07-20 19:03:25.720312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.633 [2024-07-20 19:03:25.731570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:15.633 [2024-07-20 19:03:25.732917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.633 [2024-07-20 19:03:25.732945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.633 [2024-07-20 19:03:25.744266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:15.633 [2024-07-20 19:03:25.745575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.633 [2024-07-20 19:03:25.745605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.633 [2024-07-20 19:03:25.756917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:15.633 [2024-07-20 19:03:25.758115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.633 [2024-07-20 19:03:25.758143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.633 [2024-07-20 19:03:25.769370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:15.633 [2024-07-20 19:03:25.770694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.633 [2024-07-20 19:03:25.770725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.633 [2024-07-20 19:03:25.781967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:15.633 [2024-07-20 19:03:25.783275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4617 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:15.633 [2024-07-20 19:03:25.783303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.633 [2024-07-20 19:03:25.793743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:15.633 [2024-07-20 19:03:25.795014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.633 [2024-07-20 19:03:25.795043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.633 [2024-07-20 19:03:25.805536] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:15.633 [2024-07-20 19:03:25.806740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.633 [2024-07-20 19:03:25.806767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.633 [2024-07-20 19:03:25.817538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:15.633 [2024-07-20 19:03:25.818810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.633 [2024-07-20 19:03:25.818848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.633 [2024-07-20 19:03:25.829446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:15.633 [2024-07-20 19:03:25.830750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.633 [2024-07-20 19:03:25.830778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.633 [2024-07-20 19:03:25.841222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:15.633 [2024-07-20 19:03:25.842475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.633 [2024-07-20 19:03:25.842502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.633 [2024-07-20 19:03:25.853171] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:15.633 [2024-07-20 19:03:25.854382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.633 [2024-07-20 19:03:25.854410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.633 [2024-07-20 19:03:25.864921] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:15.633 [2024-07-20 19:03:25.866118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 
lba:20682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.633 [2024-07-20 19:03:25.866146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.633 [2024-07-20 19:03:25.876645] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:15.633 [2024-07-20 19:03:25.877874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.633 [2024-07-20 19:03:25.877901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.633 [2024-07-20 19:03:25.888468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:15.633 [2024-07-20 19:03:25.889770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.633 [2024-07-20 19:03:25.889814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.633 [2024-07-20 19:03:25.900344] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:15.633 [2024-07-20 19:03:25.901681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.633 [2024-07-20 19:03:25.901709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.633 [2024-07-20 19:03:25.912276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:15.633 [2024-07-20 19:03:25.913585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.633 [2024-07-20 19:03:25.913613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.633 [2024-07-20 19:03:25.924019] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:15.633 [2024-07-20 19:03:25.925232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.633 [2024-07-20 19:03:25.925260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.633 [2024-07-20 19:03:25.935707] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:15.633 [2024-07-20 19:03:25.936917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.633 [2024-07-20 19:03:25.936945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.633 [2024-07-20 19:03:25.947444] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:15.633 [2024-07-20 19:03:25.948681] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.633 [2024-07-20 19:03:25.948709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.893 [2024-07-20 19:03:25.959696] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:15.893 [2024-07-20 19:03:25.961004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.893 [2024-07-20 19:03:25.961031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.893 [2024-07-20 19:03:25.971480] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:15.893 [2024-07-20 19:03:25.972719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.893 [2024-07-20 19:03:25.972746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.893 [2024-07-20 19:03:25.983377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:15.893 [2024-07-20 19:03:25.984597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.893 [2024-07-20 19:03:25.984624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.893 [2024-07-20 19:03:25.995277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:15.893 [2024-07-20 19:03:25.996541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.893 [2024-07-20 19:03:25.996569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.893 [2024-07-20 19:03:26.007116] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:15.893 [2024-07-20 19:03:26.008353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.893 [2024-07-20 19:03:26.008381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.893 [2024-07-20 19:03:26.018837] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:15.893 [2024-07-20 19:03:26.020047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.893 [2024-07-20 19:03:26.020075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.893 [2024-07-20 19:03:26.030511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:15.893 [2024-07-20 19:03:26.031756] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.893 [2024-07-20 19:03:26.031785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.893 [2024-07-20 19:03:26.042255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:15.893 [2024-07-20 19:03:26.043598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.893 [2024-07-20 19:03:26.043626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.893 [2024-07-20 19:03:26.053909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:15.893 [2024-07-20 19:03:26.055081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.893 [2024-07-20 19:03:26.055108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.893 [2024-07-20 19:03:26.065672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:15.893 [2024-07-20 19:03:26.066952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.893 [2024-07-20 19:03:26.066980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.893 [2024-07-20 19:03:26.077457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:15.893 [2024-07-20 19:03:26.078769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.893 [2024-07-20 19:03:26.078803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.893 [2024-07-20 19:03:26.089255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:15.893 [2024-07-20 19:03:26.090568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.893 [2024-07-20 19:03:26.090595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.893 [2024-07-20 19:03:26.100951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:15.893 [2024-07-20 19:03:26.102181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.893 [2024-07-20 19:03:26.102209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.893 [2024-07-20 19:03:26.112564] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:15.893 [2024-07-20 
19:03:26.113858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.893 [2024-07-20 19:03:26.113886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.893 [2024-07-20 19:03:26.124319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:15.893 [2024-07-20 19:03:26.125618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.893 [2024-07-20 19:03:26.125646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.893 [2024-07-20 19:03:26.136055] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:15.893 [2024-07-20 19:03:26.137262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.893 [2024-07-20 19:03:26.137290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.893 [2024-07-20 19:03:26.147861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:15.893 [2024-07-20 19:03:26.149059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.893 [2024-07-20 19:03:26.149088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.893 [2024-07-20 19:03:26.159470] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:15.893 [2024-07-20 19:03:26.160701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.893 [2024-07-20 19:03:26.160728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.893 [2024-07-20 19:03:26.171268] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:15.893 [2024-07-20 19:03:26.172549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.893 [2024-07-20 19:03:26.172577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.893 [2024-07-20 19:03:26.182972] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:15.893 [2024-07-20 19:03:26.184244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.893 [2024-07-20 19:03:26.184272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.893 [2024-07-20 19:03:26.194723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 
00:33:15.894 [2024-07-20 19:03:26.195976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.894 [2024-07-20 19:03:26.196017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:15.894 [2024-07-20 19:03:26.206458] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:15.894 [2024-07-20 19:03:26.207702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:15.894 [2024-07-20 19:03:26.207730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.153 [2024-07-20 19:03:26.218481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:16.153 [2024-07-20 19:03:26.219943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.153 [2024-07-20 19:03:26.219971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.153 [2024-07-20 19:03:26.230389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:16.153 [2024-07-20 19:03:26.231618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.153 [2024-07-20 19:03:26.231646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.153 [2024-07-20 19:03:26.242126] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:16.153 [2024-07-20 19:03:26.243355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.153 [2024-07-20 19:03:26.243383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.153 [2024-07-20 19:03:26.253930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:16.153 [2024-07-20 19:03:26.255181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.153 [2024-07-20 19:03:26.255209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.153 [2024-07-20 19:03:26.265678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:16.153 [2024-07-20 19:03:26.266947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.153 [2024-07-20 19:03:26.266975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.153 [2024-07-20 19:03:26.277465] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:16.153 [2024-07-20 19:03:26.278702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.153 [2024-07-20 19:03:26.278729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.153 [2024-07-20 19:03:26.289319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:16.153 [2024-07-20 19:03:26.290601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.153 [2024-07-20 19:03:26.290629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.153 [2024-07-20 19:03:26.301171] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:16.153 [2024-07-20 19:03:26.302481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.153 [2024-07-20 19:03:26.302509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.153 [2024-07-20 19:03:26.312930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:16.153 [2024-07-20 19:03:26.314116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.153 [2024-07-20 19:03:26.314143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.153 [2024-07-20 19:03:26.324627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:16.153 [2024-07-20 19:03:26.325828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.153 [2024-07-20 19:03:26.325856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.153 [2024-07-20 19:03:26.336418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:16.153 [2024-07-20 19:03:26.337652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.153 [2024-07-20 19:03:26.337680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.153 [2024-07-20 19:03:26.348252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:16.153 [2024-07-20 19:03:26.349514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.153 [2024-07-20 19:03:26.349542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.153 [2024-07-20 19:03:26.360035] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:16.153 [2024-07-20 19:03:26.361280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.153 [2024-07-20 19:03:26.361308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.153 [2024-07-20 19:03:26.371824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:16.153 [2024-07-20 19:03:26.373022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.153 [2024-07-20 19:03:26.373050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.153 [2024-07-20 19:03:26.383565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:16.153 [2024-07-20 19:03:26.384811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.153 [2024-07-20 19:03:26.384840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.153 [2024-07-20 19:03:26.395394] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:16.153 [2024-07-20 19:03:26.396612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.153 [2024-07-20 19:03:26.396640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.153 [2024-07-20 19:03:26.407269] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:16.153 [2024-07-20 19:03:26.408552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.153 [2024-07-20 19:03:26.408579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.154 [2024-07-20 19:03:26.419044] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:16.154 [2024-07-20 19:03:26.420238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.154 [2024-07-20 19:03:26.420266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.154 [2024-07-20 19:03:26.430744] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:16.154 [2024-07-20 19:03:26.432001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.154 [2024-07-20 19:03:26.432029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
00:33:16.154 [2024-07-20 19:03:26.442463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:16.154 [2024-07-20 19:03:26.443741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.154 [2024-07-20 19:03:26.443770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.154 [2024-07-20 19:03:26.454239] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ed0b0 00:33:16.154 [2024-07-20 19:03:26.455482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.154 [2024-07-20 19:03:26.455510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.154 [2024-07-20 19:03:26.465971] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f8618 00:33:16.154 [2024-07-20 19:03:26.467317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.154 [2024-07-20 19:03:26.467345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.413 [2024-07-20 19:03:26.478048] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f1ca0 00:33:16.413 [2024-07-20 19:03:26.479393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.413 [2024-07-20 19:03:26.479421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.413 [2024-07-20 19:03:26.490006] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190f3e60 00:33:16.413 [2024-07-20 19:03:26.491260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.413 [2024-07-20 19:03:26.491288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.413 [2024-07-20 19:03:26.501706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430bc0) with pdu=0x2000190ff3c8 00:33:16.413 [2024-07-20 19:03:26.503003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.413 [2024-07-20 19:03:26.503038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.413 00:33:16.413 Latency(us) 00:33:16.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:16.413 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:16.413 nvme0n1 : 2.01 20891.27 81.61 0.00 0.00 6117.27 2815.62 13689.74 00:33:16.413 =================================================================================================================== 00:33:16.413 Total : 20891.27 81.61 0.00 0.00 6117.27 2815.62 13689.74 
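The latency table above closes the first error-injection pass (4 KiB randwrite at queue depth 128). Right after it, the harness reads the per-bdev NVMe error counters back from bdevperf and requires the transient-transport-error count to be nonzero, which is the (( 164 > 0 )) check in the lines that follow. A standalone sketch of that readback, assuming the rpc.py path and bperf socket shown in this log and jq on the PATH:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # counters are only collected because bdev_nvme_set_options was called with --nvme-error-stat
  errcount=$($RPC -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # the digest-error case passes only if the injected corruption actually produced errors
  (( errcount > 0 )) || echo "no transient transport errors recorded for nvme0n1"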
00:33:16.413 0 00:33:16.413 19:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:16.413 19:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:16.413 19:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:16.413 | .driver_specific 00:33:16.413 | .nvme_error 00:33:16.413 | .status_code 00:33:16.413 | .command_transient_transport_error' 00:33:16.413 19:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:16.672 19:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 164 > 0 )) 00:33:16.672 19:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1536463 00:33:16.672 19:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1536463 ']' 00:33:16.672 19:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1536463 00:33:16.672 19:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:16.672 19:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:16.672 19:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1536463 00:33:16.672 19:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:16.672 19:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:16.672 19:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1536463' 00:33:16.672 killing process with pid 1536463 00:33:16.672 19:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1536463 00:33:16.672 Received shutdown signal, test time was about 2.000000 seconds 00:33:16.672 00:33:16.672 Latency(us) 00:33:16.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:16.672 =================================================================================================================== 00:33:16.672 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:16.672 19:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1536463 00:33:16.931 19:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:33:16.931 19:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:16.931 19:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:16.931 19:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:16.931 19:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:16.931 19:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1536868 00:33:16.931 19:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:16.931 19:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1536868 /var/tmp/bperf.sock 00:33:16.931 19:03:27 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1536868 ']' 00:33:16.931 19:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:16.931 19:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:16.931 19:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:16.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:16.931 19:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:16.931 19:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:16.931 [2024-07-20 19:03:27.080591] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:16.931 [2024-07-20 19:03:27.080667] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1536868 ] 00:33:16.931 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:16.931 Zero copy mechanism will not be used. 00:33:16.931 EAL: No free 2048 kB hugepages reported on node 1 00:33:16.931 [2024-07-20 19:03:27.142348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.931 [2024-07-20 19:03:27.231389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:17.190 19:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:17.190 19:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:17.190 19:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:17.190 19:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:17.449 19:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:17.449 19:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.449 19:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:17.449 19:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.449 19:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:17.449 19:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:18.020 nvme0n1 00:33:18.020 19:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:18.020 19:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.020 19:03:28 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:18.020 19:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.020 19:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:18.020 19:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:18.020 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:18.020 Zero copy mechanism will not be used. 00:33:18.020 Running I/O for 2 seconds... 00:33:18.020 [2024-07-20 19:03:28.300034] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.020 [2024-07-20 19:03:28.301136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.020 [2024-07-20 19:03:28.301194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.020 [2024-07-20 19:03:28.332015] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.020 [2024-07-20 19:03:28.333012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.020 [2024-07-20 19:03:28.333061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.280 [2024-07-20 19:03:28.366053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.280 [2024-07-20 19:03:28.366953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.280 [2024-07-20 19:03:28.366984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.280 [2024-07-20 19:03:28.399539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.280 [2024-07-20 19:03:28.400431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.280 [2024-07-20 19:03:28.400461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.280 [2024-07-20 19:03:28.434366] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.280 [2024-07-20 19:03:28.435483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.280 [2024-07-20 19:03:28.435514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.280 [2024-07-20 19:03:28.469120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.280 [2024-07-20 19:03:28.469875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.280 [2024-07-20 19:03:28.469905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.280 [2024-07-20 19:03:28.504134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.280 [2024-07-20 19:03:28.505260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.280 [2024-07-20 19:03:28.505289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.281 [2024-07-20 19:03:28.538843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.281 [2024-07-20 19:03:28.539843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.281 [2024-07-20 19:03:28.539874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.281 [2024-07-20 19:03:28.572080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.281 [2024-07-20 19:03:28.572569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.281 [2024-07-20 19:03:28.572599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.541 [2024-07-20 19:03:28.604976] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.541 [2024-07-20 19:03:28.605880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.541 [2024-07-20 19:03:28.605912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.541 [2024-07-20 19:03:28.641403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.541 [2024-07-20 19:03:28.642154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.541 [2024-07-20 19:03:28.642198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.541 [2024-07-20 19:03:28.676826] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.541 [2024-07-20 19:03:28.677643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.541 [2024-07-20 19:03:28.677673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.541 [2024-07-20 19:03:28.711751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.541 [2024-07-20 19:03:28.712610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.541 [2024-07-20 19:03:28.712640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.541 [2024-07-20 19:03:28.745451] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.541 [2024-07-20 19:03:28.746552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.541 [2024-07-20 19:03:28.746582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.541 [2024-07-20 19:03:28.778523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.541 [2024-07-20 19:03:28.778988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.541 [2024-07-20 19:03:28.779019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.541 [2024-07-20 19:03:28.813771] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.541 [2024-07-20 19:03:28.814664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.541 [2024-07-20 19:03:28.814692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.541 [2024-07-20 19:03:28.847827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.541 [2024-07-20 19:03:28.848680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.541 [2024-07-20 19:03:28.848720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.801 [2024-07-20 19:03:28.885591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.801 [2024-07-20 19:03:28.886558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.801 [2024-07-20 19:03:28.886597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.801 [2024-07-20 19:03:28.919905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.801 [2024-07-20 19:03:28.920612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.801 [2024-07-20 19:03:28.920641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.801 [2024-07-20 19:03:28.949534] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.801 [2024-07-20 19:03:28.950429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.801 [2024-07-20 19:03:28.950458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.801 [2024-07-20 19:03:28.982884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.801 [2024-07-20 19:03:28.983962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.801 [2024-07-20 19:03:28.983993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.801 [2024-07-20 19:03:29.017554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.801 [2024-07-20 19:03:29.018313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.801 [2024-07-20 19:03:29.018342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.801 [2024-07-20 19:03:29.051765] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.801 [2024-07-20 19:03:29.052735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.802 [2024-07-20 19:03:29.052766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.802 [2024-07-20 19:03:29.083945] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.802 [2024-07-20 19:03:29.084434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.802 [2024-07-20 19:03:29.084463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.802 [2024-07-20 19:03:29.117393] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:18.802 [2024-07-20 19:03:29.118125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.802 [2024-07-20 19:03:29.118156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.062 [2024-07-20 19:03:29.149988] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.062 [2024-07-20 19:03:29.150715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.062 [2024-07-20 19:03:29.150745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.062 [2024-07-20 19:03:29.185893] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.062 
[2024-07-20 19:03:29.186499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.062 [2024-07-20 19:03:29.186528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.062 [2024-07-20 19:03:29.219275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.062 [2024-07-20 19:03:29.220260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.062 [2024-07-20 19:03:29.220303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.062 [2024-07-20 19:03:29.253782] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.062 [2024-07-20 19:03:29.254515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.062 [2024-07-20 19:03:29.254543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.062 [2024-07-20 19:03:29.289552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.062 [2024-07-20 19:03:29.290554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.062 [2024-07-20 19:03:29.290584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.062 [2024-07-20 19:03:29.325467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.062 [2024-07-20 19:03:29.326239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.062 [2024-07-20 19:03:29.326283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.062 [2024-07-20 19:03:29.357901] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.062 [2024-07-20 19:03:29.358519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.062 [2024-07-20 19:03:29.358549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.325 [2024-07-20 19:03:29.393163] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.325 [2024-07-20 19:03:29.394127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.325 [2024-07-20 19:03:29.394158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.325 [2024-07-20 19:03:29.428258] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.325 [2024-07-20 19:03:29.428947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.325 [2024-07-20 19:03:29.428978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.325 [2024-07-20 19:03:29.462808] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.325 [2024-07-20 19:03:29.463622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.325 [2024-07-20 19:03:29.463654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.325 [2024-07-20 19:03:29.498042] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.325 [2024-07-20 19:03:29.498704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.325 [2024-07-20 19:03:29.498737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.325 [2024-07-20 19:03:29.529943] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.325 [2024-07-20 19:03:29.530764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.325 [2024-07-20 19:03:29.530805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.325 [2024-07-20 19:03:29.565743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.325 [2024-07-20 19:03:29.566285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.325 [2024-07-20 19:03:29.566318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.325 [2024-07-20 19:03:29.599543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.325 [2024-07-20 19:03:29.600364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.325 [2024-07-20 19:03:29.600397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.325 [2024-07-20 19:03:29.636940] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.325 [2024-07-20 19:03:29.637986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.325 [2024-07-20 19:03:29.638028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.599 [2024-07-20 19:03:29.674151] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.599 [2024-07-20 19:03:29.675223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.599 [2024-07-20 19:03:29.675253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.599 [2024-07-20 19:03:29.710043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.599 [2024-07-20 19:03:29.711000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.599 [2024-07-20 19:03:29.711030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.599 [2024-07-20 19:03:29.740461] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.599 [2024-07-20 19:03:29.741162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.599 [2024-07-20 19:03:29.741196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.599 [2024-07-20 19:03:29.779231] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.599 [2024-07-20 19:03:29.780330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.599 [2024-07-20 19:03:29.780369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.599 [2024-07-20 19:03:29.819379] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.599 [2024-07-20 19:03:29.820198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.599 [2024-07-20 19:03:29.820231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.599 [2024-07-20 19:03:29.855014] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.599 [2024-07-20 19:03:29.855871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.599 [2024-07-20 19:03:29.855915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.599 [2024-07-20 19:03:29.894908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.599 [2024-07-20 19:03:29.895861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.599 [2024-07-20 19:03:29.895891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
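These entries belong to the second error pass, which exercises 128 KiB writes at queue depth 16 (hence the zero-copy-threshold notices earlier). How the run is driven is visible in the trace above: an idle bdevperf instance is started with -z, NVMe error statistics are enabled, the controller is attached with data digests turned on, the injector is armed, and perform_tests is issued over the bperf socket. A consolidated sketch of those steps with the arguments from this log; paths and the socket name are taken from the workspace above:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock
  # -z starts bdevperf idle; it waits for a perform_tests RPC before issuing I/O
  $SPDK/build/examples/bdevperf -m 2 -r $SOCK -w randwrite -o 131072 -t 2 -q 16 -z &
  # (the harness waits for the RPC socket to appear before issuing the calls below)
  # collect per-bdev NVMe error counters (arguments copied from the trace)
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # --ddgst enables the data digest, which is what the injected CRC32C corruption breaks
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # start the two-second timed run
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests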
00:33:19.862 [2024-07-20 19:03:29.932318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.862 [2024-07-20 19:03:29.933411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.862 [2024-07-20 19:03:29.933441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.862 [2024-07-20 19:03:29.969592] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.862 [2024-07-20 19:03:29.970123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.862 [2024-07-20 19:03:29.970168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.862 [2024-07-20 19:03:30.005825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.862 [2024-07-20 19:03:30.006948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.862 [2024-07-20 19:03:30.006980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.862 [2024-07-20 19:03:30.042997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.862 [2024-07-20 19:03:30.043665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.862 [2024-07-20 19:03:30.043710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.862 [2024-07-20 19:03:30.072522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.862 [2024-07-20 19:03:30.073487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.862 [2024-07-20 19:03:30.073526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.862 [2024-07-20 19:03:30.103676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.862 [2024-07-20 19:03:30.104493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.862 [2024-07-20 19:03:30.104526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.862 [2024-07-20 19:03:30.138449] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.862 [2024-07-20 19:03:30.139248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.862 [2024-07-20 19:03:30.139282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.862 [2024-07-20 19:03:30.172418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:19.862 [2024-07-20 19:03:30.173489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.862 [2024-07-20 19:03:30.173522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.122 [2024-07-20 19:03:30.209972] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:20.122 [2024-07-20 19:03:30.211038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.122 [2024-07-20 19:03:30.211068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.122 [2024-07-20 19:03:30.247761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:20.122 [2024-07-20 19:03:30.248692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.122 [2024-07-20 19:03:30.248725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.122 [2024-07-20 19:03:30.282630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1430e90) with pdu=0x2000190fef90 00:33:20.122 [2024-07-20 19:03:30.283579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.122 [2024-07-20 19:03:30.283611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.122 00:33:20.122 Latency(us) 00:33:20.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.122 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:20.122 nvme0n1 : 2.02 891.39 111.42 0.00 0.00 17861.36 8301.23 41360.50 00:33:20.122 =================================================================================================================== 00:33:20.122 Total : 891.39 111.42 0.00 0.00 17861.36 8301.23 41360.50 00:33:20.122 0 00:33:20.122 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:20.122 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:20.122 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:20.122 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:20.122 | .driver_specific 00:33:20.122 | .nvme_error 00:33:20.122 | .status_code 00:33:20.122 | .command_transient_transport_error' 00:33:20.381 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 58 > 0 )) 00:33:20.381 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1536868 00:33:20.381 19:03:30 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1536868 ']' 00:33:20.381 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1536868 00:33:20.381 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:20.381 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:20.381 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1536868 00:33:20.381 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:20.381 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:20.381 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1536868' 00:33:20.381 killing process with pid 1536868 00:33:20.381 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1536868 00:33:20.381 Received shutdown signal, test time was about 2.000000 seconds 00:33:20.381 00:33:20.381 Latency(us) 00:33:20.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.381 =================================================================================================================== 00:33:20.381 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:20.381 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1536868 00:33:20.640 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1535504 00:33:20.640 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1535504 ']' 00:33:20.640 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1535504 00:33:20.640 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:20.640 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:20.640 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1535504 00:33:20.640 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:20.640 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:20.640 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1535504' 00:33:20.640 killing process with pid 1535504 00:33:20.640 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1535504 00:33:20.640 19:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1535504 00:33:20.898 00:33:20.898 real 0m15.564s 00:33:20.898 user 0m31.682s 00:33:20.898 sys 0m3.699s 00:33:20.898 19:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:20.898 19:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:20.898 ************************************ 00:33:20.898 END TEST nvmf_digest_error 00:33:20.898 ************************************ 00:33:20.898 19:03:31 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:20.898 19:03:31 
nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:20.898 19:03:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:20.898 19:03:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:33:20.898 19:03:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:20.898 19:03:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:33:20.898 19:03:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:20.898 19:03:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:20.899 rmmod nvme_tcp 00:33:20.899 rmmod nvme_fabrics 00:33:20.899 rmmod nvme_keyring 00:33:20.899 19:03:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:20.899 19:03:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:33:20.899 19:03:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:33:20.899 19:03:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1535504 ']' 00:33:20.899 19:03:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1535504 00:33:20.899 19:03:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 1535504 ']' 00:33:20.899 19:03:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 1535504 00:33:20.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1535504) - No such process 00:33:20.899 19:03:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 1535504 is not found' 00:33:20.899 Process with pid 1535504 is not found 00:33:20.899 19:03:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:20.899 19:03:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:20.899 19:03:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:20.899 19:03:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:20.899 19:03:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:20.899 19:03:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.899 19:03:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:20.899 19:03:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.830 19:03:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:22.830 00:33:22.830 real 0m34.961s 00:33:22.830 user 1m3.117s 00:33:22.830 sys 0m8.810s 00:33:22.830 19:03:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:22.830 19:03:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:22.830 ************************************ 00:33:22.830 END TEST nvmf_digest 00:33:22.830 ************************************ 00:33:22.830 19:03:33 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:33:22.830 19:03:33 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:33:22.830 19:03:33 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:33:22.830 19:03:33 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:22.830 19:03:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:22.830 19:03:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:22.830 19:03:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:23.089 
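With the digest suite finished, the trace above tears the fabric down before nvmf_bdevperf starts: the nvmf target process is killed if it is still around, the NVMe-oF TCP modules are unloaded, the target-side network namespace is removed, and the leftover initiator address is flushed. A rough consolidation of that cleanup, assuming the namespace and interface names used throughout this log and that _remove_spdk_ns boils down to deleting the SPDK-created namespace:

  # stop the nvmf target if the pid from the log is still alive
  kill -0 1535504 2>/dev/null && kill 1535504
  # unload the initiator stack; nvme_keyring is pulled out as a dependency (see the rmmod lines above)
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # drop the target-side namespace and flush the initiator-side address
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1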
************************************ 00:33:23.089 START TEST nvmf_bdevperf 00:33:23.089 ************************************ 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:23.089 * Looking for test storage... 00:33:23.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:23.089 19:03:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:24.989 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:24.989 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:24.989 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:24.989 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:24.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:24.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:33:24.989 00:33:24.989 --- 10.0.0.2 ping statistics --- 00:33:24.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.989 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:24.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:24.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:33:24.989 00:33:24.989 --- 10.0.0.1 ping statistics --- 00:33:24.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.989 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:24.989 19:03:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:24.990 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:24.990 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:24.990 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:24.990 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1539216 00:33:24.990 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:24.990 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1539216 00:33:24.990 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 1539216 ']' 00:33:24.990 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:24.990 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:24.990 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:24.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:24.990 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:24.990 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:25.247 [2024-07-20 19:03:35.338081] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:25.247 [2024-07-20 19:03:35.338191] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:25.247 EAL: No free 2048 kB hugepages reported on node 1 00:33:25.247 [2024-07-20 19:03:35.404073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:25.247 [2024-07-20 19:03:35.490777] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:25.247 [2024-07-20 19:03:35.490836] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:25.247 [2024-07-20 19:03:35.490864] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:25.247 [2024-07-20 19:03:35.490876] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:25.247 [2024-07-20 19:03:35.490886] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:25.247 [2024-07-20 19:03:35.490969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:25.247 [2024-07-20 19:03:35.491048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:25.247 [2024-07-20 19:03:35.491051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:25.504 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:25.504 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:33:25.504 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:25.504 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:25.504 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:25.504 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:25.504 19:03:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:25.504 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.504 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:25.504 [2024-07-20 19:03:35.625146] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:25.504 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.504 19:03:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:25.504 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.504 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:25.504 Malloc0 00:33:25.504 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.504 19:03:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:25.504 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.504 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:25.504 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.504 19:03:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:25.504 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.504 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:25.504 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.505 19:03:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:25.505 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:33:25.505 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:25.505 [2024-07-20 19:03:35.688394] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:25.505 19:03:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.505 19:03:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:25.505 19:03:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:25.505 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:25.505 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:25.505 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:25.505 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:25.505 { 00:33:25.505 "params": { 00:33:25.505 "name": "Nvme$subsystem", 00:33:25.505 "trtype": "$TEST_TRANSPORT", 00:33:25.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:25.505 "adrfam": "ipv4", 00:33:25.505 "trsvcid": "$NVMF_PORT", 00:33:25.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:25.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:25.505 "hdgst": ${hdgst:-false}, 00:33:25.505 "ddgst": ${ddgst:-false} 00:33:25.505 }, 00:33:25.505 "method": "bdev_nvme_attach_controller" 00:33:25.505 } 00:33:25.505 EOF 00:33:25.505 )") 00:33:25.505 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:25.505 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:25.505 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:25.505 19:03:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:25.505 "params": { 00:33:25.505 "name": "Nvme1", 00:33:25.505 "trtype": "tcp", 00:33:25.505 "traddr": "10.0.0.2", 00:33:25.505 "adrfam": "ipv4", 00:33:25.505 "trsvcid": "4420", 00:33:25.505 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:25.505 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:25.505 "hdgst": false, 00:33:25.505 "ddgst": false 00:33:25.505 }, 00:33:25.505 "method": "bdev_nvme_attach_controller" 00:33:25.505 }' 00:33:25.505 [2024-07-20 19:03:35.733414] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:25.505 [2024-07-20 19:03:35.733489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1539359 ] 00:33:25.505 EAL: No free 2048 kB hugepages reported on node 1 00:33:25.505 [2024-07-20 19:03:35.793753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.762 [2024-07-20 19:03:35.881122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.019 Running I/O for 1 seconds... 
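The rpc_cmd calls recorded above (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) are the entire target-side configuration for this test. As a minimal sketch only, assuming the stock scripts/rpc.py client and the default /var/tmp/spdk.sock RPC socket (neither is shown in this log), the same bring-up issued by hand against the already-running nvmf_tgt would be roughly:

  # sketch, not the test script itself; rpc.py path and socket are assumptions
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON printed by gen_nvmf_target_json above is what bdevperf receives on /dev/fd/62: a single bdev_nvme_attach_controller entry pointing at that 10.0.0.2:4420 listener.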
00:33:26.951 00:33:26.951 Latency(us) 00:33:26.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.951 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:26.951 Verification LBA range: start 0x0 length 0x4000 00:33:26.951 Nvme1n1 : 1.01 9282.25 36.26 0.00 0.00 13717.31 2233.08 21359.88 00:33:26.951 =================================================================================================================== 00:33:26.951 Total : 9282.25 36.26 0.00 0.00 13717.31 2233.08 21359.88 00:33:27.209 19:03:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1539503 00:33:27.209 19:03:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:27.209 19:03:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:27.209 19:03:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:27.209 19:03:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:27.209 19:03:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:27.209 19:03:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:27.209 19:03:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:27.209 { 00:33:27.209 "params": { 00:33:27.209 "name": "Nvme$subsystem", 00:33:27.209 "trtype": "$TEST_TRANSPORT", 00:33:27.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:27.209 "adrfam": "ipv4", 00:33:27.209 "trsvcid": "$NVMF_PORT", 00:33:27.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:27.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:27.209 "hdgst": ${hdgst:-false}, 00:33:27.209 "ddgst": ${ddgst:-false} 00:33:27.209 }, 00:33:27.209 "method": "bdev_nvme_attach_controller" 00:33:27.209 } 00:33:27.209 EOF 00:33:27.209 )") 00:33:27.209 19:03:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:27.209 19:03:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:27.209 19:03:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:27.209 19:03:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:27.209 "params": { 00:33:27.209 "name": "Nvme1", 00:33:27.209 "trtype": "tcp", 00:33:27.209 "traddr": "10.0.0.2", 00:33:27.209 "adrfam": "ipv4", 00:33:27.209 "trsvcid": "4420", 00:33:27.209 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:27.209 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:27.209 "hdgst": false, 00:33:27.209 "ddgst": false 00:33:27.209 }, 00:33:27.209 "method": "bdev_nvme_attach_controller" 00:33:27.209 }' 00:33:27.209 [2024-07-20 19:03:37.478642] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:27.209 [2024-07-20 19:03:37.478732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1539503 ] 00:33:27.209 EAL: No free 2048 kB hugepages reported on node 1 00:33:27.466 [2024-07-20 19:03:37.540514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.466 [2024-07-20 19:03:37.623656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:27.723 Running I/O for 15 seconds... 
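After the 1-second verify pass completes, host/bdevperf.sh starts a second, longer bdevperf run (-t 15 -f), waits three seconds, and then hard-kills the nvmf target (kill -9 1539216) while I/O is still in flight; the long run of "ABORTED - SQ DELETION" completions that follows is that in-flight I/O being failed back to the initiator when the target's submission queues disappear. A minimal standalone sketch of the same sequence, assuming nvmf/common.sh is sourced for gen_nvmf_target_json and that the binaries sit under the usual SPDK build paths:

  # sketch of the kill-while-busy step this log is exercising; paths are assumptions
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
  bdevperfpid=$!
  sleep 3
  kill -9 "$nvmfpid"   # target gone; outstanding commands complete as ABORTED - SQ DELETION
  sleep 3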
00:33:30.277 19:03:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1539216 00:33:30.277 19:03:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:30.277 [2024-07-20 19:03:40.449993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:54384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.277 [2024-07-20 19:03:40.450047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:54392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.277 [2024-07-20 19:03:40.450115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:54400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.277 [2024-07-20 19:03:40.450153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:54408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.277 [2024-07-20 19:03:40.450188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:54416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.277 [2024-07-20 19:03:40.450222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:54424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.277 [2024-07-20 19:03:40.450256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:54432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.277 [2024-07-20 19:03:40.450291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.450327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.450362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.450408] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.450442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.450476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.450508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.450541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:54584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.450573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.450606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.450639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.450672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.450706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:54624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.450740] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:54632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.450773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.450815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:54648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.450878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:54656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.450907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.450937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:54672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.450966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.450982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.450996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.451011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.451025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.451045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.451059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.451091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:54704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.451106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.451123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.451139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.451156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.451171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.451188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.451203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.451220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:54736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.451236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.451253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.451273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.451291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:54752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.451307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.451324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.451340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.451357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.451372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.451389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.277 [2024-07-20 19:03:40.451405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.451422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:54448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.277 [2024-07-20 19:03:40.451437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 
[2024-07-20 19:03:40.451454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:54456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.277 [2024-07-20 19:03:40.451469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.451486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:54464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.277 [2024-07-20 19:03:40.451502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.451519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.451535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.451552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.451567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.451584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.451599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.451616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.451632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.451648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.451664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.451684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:54816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.451700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.451717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.451733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.277 [2024-07-20 19:03:40.451750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.277 [2024-07-20 19:03:40.451766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.451783] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.451807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.451825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.451858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.451874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:54856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.451888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.451903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.451917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.451933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.451946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.451961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.451975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.451990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:54888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:54904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:54912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:79 nsid:1 lba:54920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:54936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:54944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:54952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:54960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:54968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:54976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:55000 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:55016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:55024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:55080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 
19:03:40.452808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:55096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.452979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.452993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.453009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.453023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.453038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.453052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.453067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.453095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.453114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:55152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.453129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.453146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.453162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.453179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.453194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.453211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.453227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.453244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:55184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.453259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.453276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.453291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.453308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.453324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.453340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.453356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.453372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.453397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.453414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.453430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.453447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:55232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.453462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.453479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:55240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.453494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.453511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.453526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.453543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:55256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.453558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.453582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.453598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.453615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.453631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.453647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.453663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.453679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.453695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.453712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:55296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.453728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.278 [2024-07-20 19:03:40.453745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:55304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.278 [2024-07-20 19:03:40.453760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.279 [2024-07-20 19:03:40.453777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.279 [2024-07-20 19:03:40.453800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.279 [2024-07-20 19:03:40.453819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:55320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.279 [2024-07-20 19:03:40.453860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:33:30.279 [2024-07-20 19:03:40.453877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.279 [2024-07-20 19:03:40.453891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.279 [2024-07-20 19:03:40.453906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:55336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.279 [2024-07-20 19:03:40.453920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.279 [2024-07-20 19:03:40.453935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.279 [2024-07-20 19:03:40.453949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.279 [2024-07-20 19:03:40.453964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:55352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.279 [2024-07-20 19:03:40.453978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.279 [2024-07-20 19:03:40.453993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:55360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.279 [2024-07-20 19:03:40.454007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.279 [2024-07-20 19:03:40.454022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.279 [2024-07-20 19:03:40.454036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.279 [2024-07-20 19:03:40.454051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.279 [2024-07-20 19:03:40.454064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.279 [2024-07-20 19:03:40.454096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:55384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.279 [2024-07-20 19:03:40.454111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.279 [2024-07-20 19:03:40.454128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:55392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.279 [2024-07-20 19:03:40.454144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.279 [2024-07-20 19:03:40.454160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:55400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.279 [2024-07-20 19:03:40.454175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.279 [2024-07-20 19:03:40.454192] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:54472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.279 [2024-07-20 19:03:40.454207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.279 [2024-07-20 19:03:40.454224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:54480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.279 [2024-07-20 19:03:40.454239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.279 [2024-07-20 19:03:40.454262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:54488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.279 [2024-07-20 19:03:40.454278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.279 [2024-07-20 19:03:40.454295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:54496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.279 [2024-07-20 19:03:40.454310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.279 [2024-07-20 19:03:40.454327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:54504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.279 [2024-07-20 19:03:40.454342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.279 [2024-07-20 19:03:40.454359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:54512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.279 [2024-07-20 19:03:40.454375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.279 [2024-07-20 19:03:40.454390] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x275f9a0 is same with the state(5) to be set 00:33:30.279 [2024-07-20 19:03:40.454409] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:30.279 [2024-07-20 19:03:40.454421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:30.279 [2024-07-20 19:03:40.454435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54520 len:8 PRP1 0x0 PRP2 0x0 00:33:30.279 [2024-07-20 19:03:40.454449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.279 [2024-07-20 19:03:40.454516] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x275f9a0 was disconnected and freed. reset controller. 
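Editor's note (not part of the captured log): the completions above are all printed as "ABORTED - SQ DELETION (00/08)", where the pair in parentheses is the NVMe status code type and status code. In the generic status code type (SCT 0x0), status code 0x08 is "Command Aborted due to SQ Deletion" in the NVMe specification, which matches the string SPDK prints when the qpair's submission queue is torn down. The snippet below is a minimal, illustrative C sketch of decoding that (SCT/SC) pair into a label; it is not SPDK's own table, only a small subset for the codes seen here.

    /* Illustrative only: map the (SCT/SC) pair printed by the log, e.g. "(00/08)",
     * to a readable label. Only a few generic (SCT 0x0) codes are shown. */
    #include <stdio.h>
    #include <stdint.h>

    static const char *nvme_generic_status_str(uint8_t sc)
    {
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x07: return "ABORTED - BY REQUEST";
        case 0x08: return "ABORTED - SQ DELETION";   /* the (00/08) case above */
        default:   return "UNKNOWN";
        }
    }

    int main(void)
    {
        uint8_t sct = 0x00, sc = 0x08;   /* values from the log lines above */
        if (sct == 0x00) {
            printf("(%02x/%02x) -> %s\n", sct, sc, nvme_generic_status_str(sc));
        }
        return 0;
    }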
00:33:30.279 [2024-07-20 19:03:40.458165] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.279 [2024-07-20 19:03:40.458239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.279 [2024-07-20 19:03:40.459022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.279 [2024-07-20 19:03:40.459053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.279 [2024-07-20 19:03:40.459069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.279 [2024-07-20 19:03:40.459331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.279 [2024-07-20 19:03:40.459574] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.279 [2024-07-20 19:03:40.459599] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.279 [2024-07-20 19:03:40.459617] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.279 [2024-07-20 19:03:40.463426] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.279 [2024-07-20 19:03:40.472366] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.279 [2024-07-20 19:03:40.472865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.279 [2024-07-20 19:03:40.472895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.279 [2024-07-20 19:03:40.472911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.279 [2024-07-20 19:03:40.473156] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.279 [2024-07-20 19:03:40.473400] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.279 [2024-07-20 19:03:40.473424] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.279 [2024-07-20 19:03:40.473440] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.279 [2024-07-20 19:03:40.477022] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
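Editor's note (not part of the captured log): the reset attempt above fails because the TCP connection to 10.0.0.2 port 4420 (the conventional NVMe/TCP port) is refused; errno 111 on Linux is ECONNREFUSED, which is what the posix_sock_create lines report while the target listener is down. A minimal, self-contained C sketch of the same failure mode follows; it uses plain sockets and is not SPDK code.

    /* Illustrative sketch: a blocking connect() to the address/port from the log.
     * With no listener on 10.0.0.2:4420, connect() fails with ECONNREFUSED (111 on Linux). */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port   = htons(4420),
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* Expected while the target is down: errno = 111 (ECONNREFUSED). */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }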
00:33:30.279 [2024-07-20 19:03:40.486318] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.279 [2024-07-20 19:03:40.486833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.279 [2024-07-20 19:03:40.486865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.279 [2024-07-20 19:03:40.486884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.279 [2024-07-20 19:03:40.487129] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.279 [2024-07-20 19:03:40.487372] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.279 [2024-07-20 19:03:40.487396] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.279 [2024-07-20 19:03:40.487411] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.279 [2024-07-20 19:03:40.491001] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.279 [2024-07-20 19:03:40.500280] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.279 [2024-07-20 19:03:40.500971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.279 [2024-07-20 19:03:40.501002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.279 [2024-07-20 19:03:40.501019] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.279 [2024-07-20 19:03:40.501258] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.279 [2024-07-20 19:03:40.501501] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.279 [2024-07-20 19:03:40.501525] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.279 [2024-07-20 19:03:40.501541] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.279 [2024-07-20 19:03:40.505122] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.279 [2024-07-20 19:03:40.514028] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.279 [2024-07-20 19:03:40.514583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.279 [2024-07-20 19:03:40.514614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.279 [2024-07-20 19:03:40.514632] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.279 [2024-07-20 19:03:40.514891] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.279 [2024-07-20 19:03:40.515124] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.279 [2024-07-20 19:03:40.515160] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.279 [2024-07-20 19:03:40.515184] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.279 [2024-07-20 19:03:40.518751] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.279 [2024-07-20 19:03:40.527900] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.279 [2024-07-20 19:03:40.528444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.279 [2024-07-20 19:03:40.528488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.279 [2024-07-20 19:03:40.528506] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.279 [2024-07-20 19:03:40.528745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.279 [2024-07-20 19:03:40.528994] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.279 [2024-07-20 19:03:40.529017] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.279 [2024-07-20 19:03:40.529031] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.279 [2024-07-20 19:03:40.532593] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.279 [2024-07-20 19:03:40.541878] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.279 [2024-07-20 19:03:40.542441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.279 [2024-07-20 19:03:40.542473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.279 [2024-07-20 19:03:40.542491] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.279 [2024-07-20 19:03:40.542730] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.279 [2024-07-20 19:03:40.542980] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.279 [2024-07-20 19:03:40.543002] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.279 [2024-07-20 19:03:40.543015] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.279 [2024-07-20 19:03:40.546575] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.279 [2024-07-20 19:03:40.555869] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.279 [2024-07-20 19:03:40.556424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.279 [2024-07-20 19:03:40.556453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.279 [2024-07-20 19:03:40.556468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.279 [2024-07-20 19:03:40.556728] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.279 [2024-07-20 19:03:40.556983] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.279 [2024-07-20 19:03:40.557008] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.279 [2024-07-20 19:03:40.557023] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.279 [2024-07-20 19:03:40.560598] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.279 [2024-07-20 19:03:40.569918] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.279 [2024-07-20 19:03:40.570452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.279 [2024-07-20 19:03:40.570500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.279 [2024-07-20 19:03:40.570517] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.279 [2024-07-20 19:03:40.570747] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.279 [2024-07-20 19:03:40.571002] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.279 [2024-07-20 19:03:40.571027] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.279 [2024-07-20 19:03:40.571042] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.279 [2024-07-20 19:03:40.574617] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.280 [2024-07-20 19:03:40.583906] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.280 [2024-07-20 19:03:40.584436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.280 [2024-07-20 19:03:40.584467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.280 [2024-07-20 19:03:40.584485] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.280 [2024-07-20 19:03:40.584723] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.280 [2024-07-20 19:03:40.584978] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.280 [2024-07-20 19:03:40.585003] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.280 [2024-07-20 19:03:40.585018] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.280 [2024-07-20 19:03:40.588590] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.538 [2024-07-20 19:03:40.597940] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.538 [2024-07-20 19:03:40.598455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.538 [2024-07-20 19:03:40.598489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.538 [2024-07-20 19:03:40.598508] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.538 [2024-07-20 19:03:40.598748] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.538 [2024-07-20 19:03:40.599002] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.538 [2024-07-20 19:03:40.599027] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.538 [2024-07-20 19:03:40.599043] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.538 [2024-07-20 19:03:40.602621] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.538 [2024-07-20 19:03:40.611839] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.538 [2024-07-20 19:03:40.612379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.538 [2024-07-20 19:03:40.612412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.538 [2024-07-20 19:03:40.612431] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.538 [2024-07-20 19:03:40.612670] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.538 [2024-07-20 19:03:40.612934] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.538 [2024-07-20 19:03:40.612960] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.538 [2024-07-20 19:03:40.612976] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.538 [2024-07-20 19:03:40.616554] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.538 [2024-07-20 19:03:40.625851] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.538 [2024-07-20 19:03:40.626469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.538 [2024-07-20 19:03:40.626523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.538 [2024-07-20 19:03:40.626541] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.538 [2024-07-20 19:03:40.626779] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.538 [2024-07-20 19:03:40.627036] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.538 [2024-07-20 19:03:40.627060] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.538 [2024-07-20 19:03:40.627076] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.538 [2024-07-20 19:03:40.630649] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.538 [2024-07-20 19:03:40.639726] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.538 [2024-07-20 19:03:40.640216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.538 [2024-07-20 19:03:40.640248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.538 [2024-07-20 19:03:40.640266] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.538 [2024-07-20 19:03:40.640504] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.538 [2024-07-20 19:03:40.640746] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.538 [2024-07-20 19:03:40.640770] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.538 [2024-07-20 19:03:40.640785] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.538 [2024-07-20 19:03:40.644376] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.538 [2024-07-20 19:03:40.653664] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.538 [2024-07-20 19:03:40.654330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.538 [2024-07-20 19:03:40.654395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.538 [2024-07-20 19:03:40.654413] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.538 [2024-07-20 19:03:40.654652] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.538 [2024-07-20 19:03:40.654907] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.538 [2024-07-20 19:03:40.654932] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.538 [2024-07-20 19:03:40.654947] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.538 [2024-07-20 19:03:40.658526] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.538 [2024-07-20 19:03:40.667601] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.538 [2024-07-20 19:03:40.668136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.538 [2024-07-20 19:03:40.668186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.538 [2024-07-20 19:03:40.668205] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.538 [2024-07-20 19:03:40.668443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.538 [2024-07-20 19:03:40.668686] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.538 [2024-07-20 19:03:40.668710] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.538 [2024-07-20 19:03:40.668726] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.538 [2024-07-20 19:03:40.672313] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.538 [2024-07-20 19:03:40.681620] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.538 [2024-07-20 19:03:40.682158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.538 [2024-07-20 19:03:40.682189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.538 [2024-07-20 19:03:40.682208] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.538 [2024-07-20 19:03:40.682446] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.538 [2024-07-20 19:03:40.682690] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.538 [2024-07-20 19:03:40.682713] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.538 [2024-07-20 19:03:40.682729] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.538 [2024-07-20 19:03:40.686314] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.538 [2024-07-20 19:03:40.695606] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.538 [2024-07-20 19:03:40.696123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.538 [2024-07-20 19:03:40.696154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.538 [2024-07-20 19:03:40.696172] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.538 [2024-07-20 19:03:40.696411] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.538 [2024-07-20 19:03:40.696653] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.538 [2024-07-20 19:03:40.696677] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.538 [2024-07-20 19:03:40.696693] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.538 [2024-07-20 19:03:40.700280] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.538 [2024-07-20 19:03:40.709223] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.538 [2024-07-20 19:03:40.709708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.538 [2024-07-20 19:03:40.709736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.538 [2024-07-20 19:03:40.709758] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.538 [2024-07-20 19:03:40.709992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.538 [2024-07-20 19:03:40.710217] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.538 [2024-07-20 19:03:40.710238] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.538 [2024-07-20 19:03:40.710251] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.538 [2024-07-20 19:03:40.713617] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.538 [2024-07-20 19:03:40.723184] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.538 [2024-07-20 19:03:40.723667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.538 [2024-07-20 19:03:40.723698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.538 [2024-07-20 19:03:40.723716] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.538 [2024-07-20 19:03:40.723964] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.538 [2024-07-20 19:03:40.724208] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.538 [2024-07-20 19:03:40.724231] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.538 [2024-07-20 19:03:40.724247] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.538 [2024-07-20 19:03:40.727822] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.538 [2024-07-20 19:03:40.737104] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.538 [2024-07-20 19:03:40.737608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.538 [2024-07-20 19:03:40.737639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.538 [2024-07-20 19:03:40.737657] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.539 [2024-07-20 19:03:40.737908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.539 [2024-07-20 19:03:40.738152] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.539 [2024-07-20 19:03:40.738176] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.539 [2024-07-20 19:03:40.738192] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.539 [2024-07-20 19:03:40.741766] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.539 [2024-07-20 19:03:40.751059] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.539 [2024-07-20 19:03:40.751584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.539 [2024-07-20 19:03:40.751615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.539 [2024-07-20 19:03:40.751632] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.539 [2024-07-20 19:03:40.751885] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.539 [2024-07-20 19:03:40.752129] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.539 [2024-07-20 19:03:40.752158] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.539 [2024-07-20 19:03:40.752175] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.539 [2024-07-20 19:03:40.755752] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.539 [2024-07-20 19:03:40.765041] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.539 [2024-07-20 19:03:40.765750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.539 [2024-07-20 19:03:40.765809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.539 [2024-07-20 19:03:40.765829] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.539 [2024-07-20 19:03:40.766068] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.539 [2024-07-20 19:03:40.766311] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.539 [2024-07-20 19:03:40.766335] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.539 [2024-07-20 19:03:40.766350] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.539 [2024-07-20 19:03:40.769936] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.539 [2024-07-20 19:03:40.779007] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.539 [2024-07-20 19:03:40.779767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.539 [2024-07-20 19:03:40.779826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.539 [2024-07-20 19:03:40.779845] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.539 [2024-07-20 19:03:40.780084] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.539 [2024-07-20 19:03:40.780327] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.539 [2024-07-20 19:03:40.780350] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.539 [2024-07-20 19:03:40.780366] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.539 [2024-07-20 19:03:40.783953] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.539 [2024-07-20 19:03:40.793032] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.539 [2024-07-20 19:03:40.793731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.539 [2024-07-20 19:03:40.793781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.539 [2024-07-20 19:03:40.793809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.539 [2024-07-20 19:03:40.794049] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.539 [2024-07-20 19:03:40.794292] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.539 [2024-07-20 19:03:40.794316] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.539 [2024-07-20 19:03:40.794332] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.539 [2024-07-20 19:03:40.797914] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.539 [2024-07-20 19:03:40.806995] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.539 [2024-07-20 19:03:40.807498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.539 [2024-07-20 19:03:40.807528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.539 [2024-07-20 19:03:40.807545] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.539 [2024-07-20 19:03:40.807784] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.539 [2024-07-20 19:03:40.808052] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.539 [2024-07-20 19:03:40.808076] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.539 [2024-07-20 19:03:40.808092] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.539 [2024-07-20 19:03:40.811665] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.539 [2024-07-20 19:03:40.820958] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.539 [2024-07-20 19:03:40.821486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.539 [2024-07-20 19:03:40.821516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.539 [2024-07-20 19:03:40.821534] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.539 [2024-07-20 19:03:40.821772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.539 [2024-07-20 19:03:40.822026] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.539 [2024-07-20 19:03:40.822050] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.539 [2024-07-20 19:03:40.822066] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.539 [2024-07-20 19:03:40.825640] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.539 [2024-07-20 19:03:40.834930] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.539 [2024-07-20 19:03:40.835452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.539 [2024-07-20 19:03:40.835483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.539 [2024-07-20 19:03:40.835501] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.539 [2024-07-20 19:03:40.835739] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.539 [2024-07-20 19:03:40.835994] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.539 [2024-07-20 19:03:40.836018] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.539 [2024-07-20 19:03:40.836034] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.539 [2024-07-20 19:03:40.839610] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.539 [2024-07-20 19:03:40.848900] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.539 [2024-07-20 19:03:40.849402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.539 [2024-07-20 19:03:40.849433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.539 [2024-07-20 19:03:40.849451] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.539 [2024-07-20 19:03:40.849695] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.539 [2024-07-20 19:03:40.849952] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.539 [2024-07-20 19:03:40.849976] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.539 [2024-07-20 19:03:40.849992] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.539 [2024-07-20 19:03:40.853567] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.798 [2024-07-20 19:03:40.863091] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.798 [2024-07-20 19:03:40.863602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.798 [2024-07-20 19:03:40.863636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.798 [2024-07-20 19:03:40.863655] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.798 [2024-07-20 19:03:40.863907] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.798 [2024-07-20 19:03:40.864176] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.798 [2024-07-20 19:03:40.864204] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.798 [2024-07-20 19:03:40.864220] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.798 [2024-07-20 19:03:40.867883] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.798 [2024-07-20 19:03:40.876966] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.798 [2024-07-20 19:03:40.877498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.798 [2024-07-20 19:03:40.877529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.798 [2024-07-20 19:03:40.877547] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.798 [2024-07-20 19:03:40.877787] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.798 [2024-07-20 19:03:40.878043] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.798 [2024-07-20 19:03:40.878067] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.798 [2024-07-20 19:03:40.878082] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.798 [2024-07-20 19:03:40.881660] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.798 [2024-07-20 19:03:40.890954] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.798 [2024-07-20 19:03:40.891554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.798 [2024-07-20 19:03:40.891585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.798 [2024-07-20 19:03:40.891603] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.798 [2024-07-20 19:03:40.891854] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.798 [2024-07-20 19:03:40.892098] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.798 [2024-07-20 19:03:40.892122] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.798 [2024-07-20 19:03:40.892144] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.798 [2024-07-20 19:03:40.895719] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.798 [2024-07-20 19:03:40.904808] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.798 [2024-07-20 19:03:40.905422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.798 [2024-07-20 19:03:40.905472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.798 [2024-07-20 19:03:40.905490] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.798 [2024-07-20 19:03:40.905729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.799 [2024-07-20 19:03:40.905984] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.799 [2024-07-20 19:03:40.906008] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.799 [2024-07-20 19:03:40.906024] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.799 [2024-07-20 19:03:40.909598] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.799 [2024-07-20 19:03:40.918675] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.799 [2024-07-20 19:03:40.919239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.799 [2024-07-20 19:03:40.919288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.799 [2024-07-20 19:03:40.919306] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.799 [2024-07-20 19:03:40.919544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.799 [2024-07-20 19:03:40.919788] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.799 [2024-07-20 19:03:40.919824] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.799 [2024-07-20 19:03:40.919840] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.799 [2024-07-20 19:03:40.923414] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.799 [2024-07-20 19:03:40.932695] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.799 [2024-07-20 19:03:40.933210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.799 [2024-07-20 19:03:40.933241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.799 [2024-07-20 19:03:40.933259] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.799 [2024-07-20 19:03:40.933498] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.799 [2024-07-20 19:03:40.933741] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.799 [2024-07-20 19:03:40.933765] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.799 [2024-07-20 19:03:40.933781] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.799 [2024-07-20 19:03:40.937365] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.799 [2024-07-20 19:03:40.946651] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.799 [2024-07-20 19:03:40.947176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.799 [2024-07-20 19:03:40.947209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.799 [2024-07-20 19:03:40.947227] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.799 [2024-07-20 19:03:40.947466] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.799 [2024-07-20 19:03:40.947709] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.799 [2024-07-20 19:03:40.947733] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.799 [2024-07-20 19:03:40.947749] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.799 [2024-07-20 19:03:40.951336] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.799 [2024-07-20 19:03:40.960390] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.799 [2024-07-20 19:03:40.960920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.799 [2024-07-20 19:03:40.960952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.799 [2024-07-20 19:03:40.960970] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.799 [2024-07-20 19:03:40.961209] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.799 [2024-07-20 19:03:40.961453] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.799 [2024-07-20 19:03:40.961476] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.799 [2024-07-20 19:03:40.961492] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.799 [2024-07-20 19:03:40.965074] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.799 [2024-07-20 19:03:40.974240] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.799 [2024-07-20 19:03:40.974953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.799 [2024-07-20 19:03:40.974985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.799 [2024-07-20 19:03:40.975004] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.799 [2024-07-20 19:03:40.975243] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.799 [2024-07-20 19:03:40.975485] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.799 [2024-07-20 19:03:40.975509] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.799 [2024-07-20 19:03:40.975524] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.799 [2024-07-20 19:03:40.979114] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.799 [2024-07-20 19:03:40.988199] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.799 [2024-07-20 19:03:40.988889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.799 [2024-07-20 19:03:40.988921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.799 [2024-07-20 19:03:40.988939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.799 [2024-07-20 19:03:40.989183] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.799 [2024-07-20 19:03:40.989426] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.799 [2024-07-20 19:03:40.989451] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.799 [2024-07-20 19:03:40.989466] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.799 [2024-07-20 19:03:40.993067] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.799 [2024-07-20 19:03:41.002164] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.799 [2024-07-20 19:03:41.002686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.799 [2024-07-20 19:03:41.002713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.799 [2024-07-20 19:03:41.002729] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.799 [2024-07-20 19:03:41.002971] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.799 [2024-07-20 19:03:41.003216] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.799 [2024-07-20 19:03:41.003239] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.799 [2024-07-20 19:03:41.003255] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.799 [2024-07-20 19:03:41.006839] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.799 [2024-07-20 19:03:41.016136] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.799 [2024-07-20 19:03:41.016651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.799 [2024-07-20 19:03:41.016682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.799 [2024-07-20 19:03:41.016700] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.799 [2024-07-20 19:03:41.016951] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.799 [2024-07-20 19:03:41.017194] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.799 [2024-07-20 19:03:41.017218] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.799 [2024-07-20 19:03:41.017234] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.799 [2024-07-20 19:03:41.020819] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.799 [2024-07-20 19:03:41.030107] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.799 [2024-07-20 19:03:41.030660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.799 [2024-07-20 19:03:41.030688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.799 [2024-07-20 19:03:41.030703] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.799 [2024-07-20 19:03:41.030978] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.799 [2024-07-20 19:03:41.031222] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.799 [2024-07-20 19:03:41.031246] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.799 [2024-07-20 19:03:41.031262] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.799 [2024-07-20 19:03:41.034847] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.799 [2024-07-20 19:03:41.044122] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.799 [2024-07-20 19:03:41.044658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.799 [2024-07-20 19:03:41.044702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.799 [2024-07-20 19:03:41.044718] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.799 [2024-07-20 19:03:41.044988] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.799 [2024-07-20 19:03:41.045233] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.799 [2024-07-20 19:03:41.045257] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.799 [2024-07-20 19:03:41.045273] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.799 [2024-07-20 19:03:41.048849] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.799 [2024-07-20 19:03:41.058129] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.799 [2024-07-20 19:03:41.058631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.799 [2024-07-20 19:03:41.058661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.799 [2024-07-20 19:03:41.058679] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.799 [2024-07-20 19:03:41.058928] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.799 [2024-07-20 19:03:41.059171] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.799 [2024-07-20 19:03:41.059195] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.799 [2024-07-20 19:03:41.059211] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.799 [2024-07-20 19:03:41.062785] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.799 [2024-07-20 19:03:41.072070] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.799 [2024-07-20 19:03:41.072572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.799 [2024-07-20 19:03:41.072604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.799 [2024-07-20 19:03:41.072622] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.799 [2024-07-20 19:03:41.072872] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.799 [2024-07-20 19:03:41.073116] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.799 [2024-07-20 19:03:41.073140] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.799 [2024-07-20 19:03:41.073156] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.799 [2024-07-20 19:03:41.076726] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.799 [2024-07-20 19:03:41.086014] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.799 [2024-07-20 19:03:41.086548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.799 [2024-07-20 19:03:41.086583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.799 [2024-07-20 19:03:41.086602] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.799 [2024-07-20 19:03:41.086852] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.799 [2024-07-20 19:03:41.087096] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.799 [2024-07-20 19:03:41.087119] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.799 [2024-07-20 19:03:41.087135] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.799 [2024-07-20 19:03:41.090711] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.799 [2024-07-20 19:03:41.099997] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.799 [2024-07-20 19:03:41.100532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.799 [2024-07-20 19:03:41.100577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.799 [2024-07-20 19:03:41.100594] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.799 [2024-07-20 19:03:41.100835] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.799 [2024-07-20 19:03:41.101096] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.799 [2024-07-20 19:03:41.101120] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.799 [2024-07-20 19:03:41.101136] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.800 [2024-07-20 19:03:41.104706] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.800 [2024-07-20 19:03:41.113993] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.800 [2024-07-20 19:03:41.114770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.800 [2024-07-20 19:03:41.114844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:30.800 [2024-07-20 19:03:41.114863] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:30.800 [2024-07-20 19:03:41.115101] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:30.800 [2024-07-20 19:03:41.115344] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.800 [2024-07-20 19:03:41.115368] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.800 [2024-07-20 19:03:41.115384] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.800 [2024-07-20 19:03:41.119126] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.058 [2024-07-20 19:03:41.127975] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.058 [2024-07-20 19:03:41.128582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-07-20 19:03:41.128614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.058 [2024-07-20 19:03:41.128633] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.058 [2024-07-20 19:03:41.128883] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.058 [2024-07-20 19:03:41.129134] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.058 [2024-07-20 19:03:41.129158] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.058 [2024-07-20 19:03:41.129174] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.058 [2024-07-20 19:03:41.132748] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.058 [2024-07-20 19:03:41.141822] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.058 [2024-07-20 19:03:41.142413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-07-20 19:03:41.142444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.059 [2024-07-20 19:03:41.142462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.059 [2024-07-20 19:03:41.142700] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.059 [2024-07-20 19:03:41.142954] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.059 [2024-07-20 19:03:41.142978] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.059 [2024-07-20 19:03:41.142994] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.059 [2024-07-20 19:03:41.146565] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.059 [2024-07-20 19:03:41.155851] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.059 [2024-07-20 19:03:41.156394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-07-20 19:03:41.156437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.059 [2024-07-20 19:03:41.156453] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.059 [2024-07-20 19:03:41.156708] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.059 [2024-07-20 19:03:41.156963] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.059 [2024-07-20 19:03:41.156987] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.059 [2024-07-20 19:03:41.157003] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.059 [2024-07-20 19:03:41.160574] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.059 [2024-07-20 19:03:41.169857] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.059 [2024-07-20 19:03:41.170632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-07-20 19:03:41.170681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.059 [2024-07-20 19:03:41.170699] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.059 [2024-07-20 19:03:41.170949] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.059 [2024-07-20 19:03:41.171192] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.059 [2024-07-20 19:03:41.171216] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.059 [2024-07-20 19:03:41.171231] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.059 [2024-07-20 19:03:41.174810] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.059 [2024-07-20 19:03:41.183889] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.059 [2024-07-20 19:03:41.184425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-07-20 19:03:41.184456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.059 [2024-07-20 19:03:41.184474] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.059 [2024-07-20 19:03:41.184712] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.059 [2024-07-20 19:03:41.184965] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.059 [2024-07-20 19:03:41.184989] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.059 [2024-07-20 19:03:41.185005] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.059 [2024-07-20 19:03:41.188576] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.059 [2024-07-20 19:03:41.197871] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.059 [2024-07-20 19:03:41.198367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-07-20 19:03:41.198399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.059 [2024-07-20 19:03:41.198417] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.059 [2024-07-20 19:03:41.198655] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.059 [2024-07-20 19:03:41.198908] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.059 [2024-07-20 19:03:41.198933] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.059 [2024-07-20 19:03:41.198949] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.059 [2024-07-20 19:03:41.202522] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.059 [2024-07-20 19:03:41.211815] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.059 [2024-07-20 19:03:41.212348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-07-20 19:03:41.212379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.059 [2024-07-20 19:03:41.212398] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.059 [2024-07-20 19:03:41.212637] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.059 [2024-07-20 19:03:41.212902] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.059 [2024-07-20 19:03:41.212924] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.059 [2024-07-20 19:03:41.212939] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.059 [2024-07-20 19:03:41.216537] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.059 [2024-07-20 19:03:41.225688] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.059 [2024-07-20 19:03:41.226192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-07-20 19:03:41.226223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.059 [2024-07-20 19:03:41.226249] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.059 [2024-07-20 19:03:41.226495] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.059 [2024-07-20 19:03:41.226750] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.059 [2024-07-20 19:03:41.226775] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.059 [2024-07-20 19:03:41.226791] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.059 [2024-07-20 19:03:41.230402] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.059 [2024-07-20 19:03:41.239558] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.059 [2024-07-20 19:03:41.240107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-07-20 19:03:41.240152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.059 [2024-07-20 19:03:41.240171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.059 [2024-07-20 19:03:41.240410] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.059 [2024-07-20 19:03:41.240654] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.059 [2024-07-20 19:03:41.240678] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.059 [2024-07-20 19:03:41.240694] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.059 [2024-07-20 19:03:41.244215] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.059 [2024-07-20 19:03:41.253538] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.059 [2024-07-20 19:03:41.254036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-07-20 19:03:41.254064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.059 [2024-07-20 19:03:41.254080] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.059 [2024-07-20 19:03:41.254333] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.059 [2024-07-20 19:03:41.254576] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.059 [2024-07-20 19:03:41.254600] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.059 [2024-07-20 19:03:41.254616] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.059 [2024-07-20 19:03:41.258233] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
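Right after each refused connect, the same attempt logs "Failed to flush tqpair ... (9): Bad file descriptor": the qpair never got a usable socket, so the subsequent flush operates on an invalid descriptor and errno 9 (EBADF) is reported. A tiny sketch of that effect follows, assuming nothing beyond write() on an invalid fd; it is illustrative only and not the nvme_tcp.c flush path.

/* Illustrative sketch: operating on a descriptor that was never established
 * fails with errno 9 (EBADF), which is where "(9): Bad file descriptor" in the
 * flush error comes from. Not the actual nvme_tcp.c flush logic. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char byte = 0;
    /* -1 stands in for a qpair socket that was never connected. */
    if (write(-1, &byte, 1) < 0) {
        printf("flush failed, errno = %d (%s)\n", errno, strerror(errno)); /* 9, EBADF */
    }
    return 0;
}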
00:33:31.059 [2024-07-20 19:03:41.267588] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.059 [2024-07-20 19:03:41.268114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-07-20 19:03:41.268141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.059 [2024-07-20 19:03:41.268156] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.059 [2024-07-20 19:03:41.268421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.059 [2024-07-20 19:03:41.268665] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.059 [2024-07-20 19:03:41.268695] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.059 [2024-07-20 19:03:41.268711] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.059 [2024-07-20 19:03:41.272324] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.059 [2024-07-20 19:03:41.281490] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.059 [2024-07-20 19:03:41.282005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-07-20 19:03:41.282047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.059 [2024-07-20 19:03:41.282064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.059 [2024-07-20 19:03:41.282318] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.059 [2024-07-20 19:03:41.282562] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.059 [2024-07-20 19:03:41.282586] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.059 [2024-07-20 19:03:41.282602] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.059 [2024-07-20 19:03:41.286136] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.059 [2024-07-20 19:03:41.295357] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.059 [2024-07-20 19:03:41.295903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-07-20 19:03:41.295931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.059 [2024-07-20 19:03:41.295947] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.059 [2024-07-20 19:03:41.296188] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.059 [2024-07-20 19:03:41.296432] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.059 [2024-07-20 19:03:41.296456] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.059 [2024-07-20 19:03:41.296472] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.059 [2024-07-20 19:03:41.300056] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.059 [2024-07-20 19:03:41.309341] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.059 [2024-07-20 19:03:41.309841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-07-20 19:03:41.309871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.059 [2024-07-20 19:03:41.309890] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.059 [2024-07-20 19:03:41.310128] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.059 [2024-07-20 19:03:41.310371] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.059 [2024-07-20 19:03:41.310395] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.059 [2024-07-20 19:03:41.310411] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.059 [2024-07-20 19:03:41.313997] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.059 [2024-07-20 19:03:41.323281] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.059 [2024-07-20 19:03:41.323887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-07-20 19:03:41.323914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.059 [2024-07-20 19:03:41.323929] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.059 [2024-07-20 19:03:41.324188] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.059 [2024-07-20 19:03:41.324440] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.059 [2024-07-20 19:03:41.324465] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.059 [2024-07-20 19:03:41.324481] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.059 [2024-07-20 19:03:41.328063] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.059 [2024-07-20 19:03:41.337146] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.059 [2024-07-20 19:03:41.337728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-07-20 19:03:41.337776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.059 [2024-07-20 19:03:41.337802] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.059 [2024-07-20 19:03:41.338043] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.059 [2024-07-20 19:03:41.338286] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.059 [2024-07-20 19:03:41.338310] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.059 [2024-07-20 19:03:41.338326] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.060 [2024-07-20 19:03:41.341903] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.060 [2024-07-20 19:03:41.351188] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.060 [2024-07-20 19:03:41.351716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-07-20 19:03:41.351748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.060 [2024-07-20 19:03:41.351766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.060 [2024-07-20 19:03:41.352015] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.060 [2024-07-20 19:03:41.352258] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.060 [2024-07-20 19:03:41.352282] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.060 [2024-07-20 19:03:41.352298] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.060 [2024-07-20 19:03:41.355877] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.060 [2024-07-20 19:03:41.365158] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.060 [2024-07-20 19:03:41.365875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-07-20 19:03:41.365906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.060 [2024-07-20 19:03:41.365924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.060 [2024-07-20 19:03:41.366168] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.060 [2024-07-20 19:03:41.366412] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.060 [2024-07-20 19:03:41.366436] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.060 [2024-07-20 19:03:41.366452] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.060 [2024-07-20 19:03:41.370036] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.060 [2024-07-20 19:03:41.379195] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.060 [2024-07-20 19:03:41.379740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-07-20 19:03:41.379776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.060 [2024-07-20 19:03:41.379806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.060 [2024-07-20 19:03:41.380050] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.060 [2024-07-20 19:03:41.380294] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.060 [2024-07-20 19:03:41.380318] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.060 [2024-07-20 19:03:41.380334] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.319 [2024-07-20 19:03:41.384001] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.319 [2024-07-20 19:03:41.393195] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.319 [2024-07-20 19:03:41.393728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.319 [2024-07-20 19:03:41.393761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.319 [2024-07-20 19:03:41.393779] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.319 [2024-07-20 19:03:41.394029] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.319 [2024-07-20 19:03:41.394274] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.319 [2024-07-20 19:03:41.394298] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.319 [2024-07-20 19:03:41.394313] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.319 [2024-07-20 19:03:41.397901] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.319 [2024-07-20 19:03:41.407193] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.319 [2024-07-20 19:03:41.407694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.319 [2024-07-20 19:03:41.407726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.319 [2024-07-20 19:03:41.407743] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.319 [2024-07-20 19:03:41.407994] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.319 [2024-07-20 19:03:41.408239] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.319 [2024-07-20 19:03:41.408262] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.319 [2024-07-20 19:03:41.408283] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.319 [2024-07-20 19:03:41.411864] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.319 [2024-07-20 19:03:41.421141] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.319 [2024-07-20 19:03:41.421661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.319 [2024-07-20 19:03:41.421692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.319 [2024-07-20 19:03:41.421710] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.319 [2024-07-20 19:03:41.421962] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.319 [2024-07-20 19:03:41.422206] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.319 [2024-07-20 19:03:41.422230] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.319 [2024-07-20 19:03:41.422245] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.319 [2024-07-20 19:03:41.425826] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.319 [2024-07-20 19:03:41.435103] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.319 [2024-07-20 19:03:41.435649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.319 [2024-07-20 19:03:41.435680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.319 [2024-07-20 19:03:41.435697] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.319 [2024-07-20 19:03:41.435947] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.319 [2024-07-20 19:03:41.436191] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.319 [2024-07-20 19:03:41.436215] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.319 [2024-07-20 19:03:41.436231] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.319 [2024-07-20 19:03:41.439811] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.319 [2024-07-20 19:03:41.449084] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.319 [2024-07-20 19:03:41.449608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.319 [2024-07-20 19:03:41.449639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.319 [2024-07-20 19:03:41.449657] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.319 [2024-07-20 19:03:41.449906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.319 [2024-07-20 19:03:41.450150] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.319 [2024-07-20 19:03:41.450174] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.319 [2024-07-20 19:03:41.450189] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.319 [2024-07-20 19:03:41.453763] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.319 [2024-07-20 19:03:41.462907] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.319 [2024-07-20 19:03:41.463406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.319 [2024-07-20 19:03:41.463438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.319 [2024-07-20 19:03:41.463455] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.319 [2024-07-20 19:03:41.463694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.319 [2024-07-20 19:03:41.463947] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.319 [2024-07-20 19:03:41.463971] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.319 [2024-07-20 19:03:41.463987] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.319 [2024-07-20 19:03:41.467556] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.319 [2024-07-20 19:03:41.476839] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.319 [2024-07-20 19:03:41.477372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.319 [2024-07-20 19:03:41.477415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.319 [2024-07-20 19:03:41.477431] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.319 [2024-07-20 19:03:41.477690] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.319 [2024-07-20 19:03:41.477943] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.319 [2024-07-20 19:03:41.477968] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.319 [2024-07-20 19:03:41.477983] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.319 [2024-07-20 19:03:41.481555] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.319 [2024-07-20 19:03:41.490763] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.319 [2024-07-20 19:03:41.491388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.319 [2024-07-20 19:03:41.491419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.319 [2024-07-20 19:03:41.491437] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.319 [2024-07-20 19:03:41.491676] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.319 [2024-07-20 19:03:41.491931] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.319 [2024-07-20 19:03:41.491955] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.319 [2024-07-20 19:03:41.491971] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.319 [2024-07-20 19:03:41.495540] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.319 [2024-07-20 19:03:41.504603] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.319 [2024-07-20 19:03:41.505401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.319 [2024-07-20 19:03:41.505452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.319 [2024-07-20 19:03:41.505471] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.319 [2024-07-20 19:03:41.505709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.319 [2024-07-20 19:03:41.505968] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.319 [2024-07-20 19:03:41.505992] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.319 [2024-07-20 19:03:41.506007] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.319 [2024-07-20 19:03:41.509577] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.319 [2024-07-20 19:03:41.518648] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.319 [2024-07-20 19:03:41.519144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.319 [2024-07-20 19:03:41.519172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.319 [2024-07-20 19:03:41.519188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.319 [2024-07-20 19:03:41.519423] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.319 [2024-07-20 19:03:41.519639] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.319 [2024-07-20 19:03:41.519659] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.319 [2024-07-20 19:03:41.519672] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.319 [2024-07-20 19:03:41.523277] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.319 [2024-07-20 19:03:41.532716] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.319 [2024-07-20 19:03:41.533209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.319 [2024-07-20 19:03:41.533250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.319 [2024-07-20 19:03:41.533266] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.319 [2024-07-20 19:03:41.533492] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.319 [2024-07-20 19:03:41.533741] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.319 [2024-07-20 19:03:41.533765] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.319 [2024-07-20 19:03:41.533781] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.319 [2024-07-20 19:03:41.537393] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.319 [2024-07-20 19:03:41.546718] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.319 [2024-07-20 19:03:41.547260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.319 [2024-07-20 19:03:41.547291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.319 [2024-07-20 19:03:41.547309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.319 [2024-07-20 19:03:41.547547] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.319 [2024-07-20 19:03:41.547800] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.319 [2024-07-20 19:03:41.547839] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.319 [2024-07-20 19:03:41.547854] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.319 [2024-07-20 19:03:41.551420] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.319 [2024-07-20 19:03:41.560656] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.319 [2024-07-20 19:03:41.561171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.319 [2024-07-20 19:03:41.561202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.319 [2024-07-20 19:03:41.561220] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.319 [2024-07-20 19:03:41.561459] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.319 [2024-07-20 19:03:41.561701] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.319 [2024-07-20 19:03:41.561725] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.319 [2024-07-20 19:03:41.561741] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.319 [2024-07-20 19:03:41.565324] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.319 [2024-07-20 19:03:41.574611] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.319 [2024-07-20 19:03:41.575154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.319 [2024-07-20 19:03:41.575185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.320 [2024-07-20 19:03:41.575203] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.320 [2024-07-20 19:03:41.575441] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.320 [2024-07-20 19:03:41.575685] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.320 [2024-07-20 19:03:41.575709] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.320 [2024-07-20 19:03:41.575724] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.320 [2024-07-20 19:03:41.579306] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.320 [2024-07-20 19:03:41.588591] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.320 [2024-07-20 19:03:41.589122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.320 [2024-07-20 19:03:41.589153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.320 [2024-07-20 19:03:41.589171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.320 [2024-07-20 19:03:41.589409] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.320 [2024-07-20 19:03:41.589652] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.320 [2024-07-20 19:03:41.589676] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.320 [2024-07-20 19:03:41.589691] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.320 [2024-07-20 19:03:41.593275] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.320 [2024-07-20 19:03:41.602554] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.320 [2024-07-20 19:03:41.603033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.320 [2024-07-20 19:03:41.603063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.320 [2024-07-20 19:03:41.603086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.320 [2024-07-20 19:03:41.603325] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.320 [2024-07-20 19:03:41.603569] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.320 [2024-07-20 19:03:41.603592] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.320 [2024-07-20 19:03:41.603608] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.320 [2024-07-20 19:03:41.607191] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.320 [2024-07-20 19:03:41.616485] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.320 [2024-07-20 19:03:41.616999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.320 [2024-07-20 19:03:41.617030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.320 [2024-07-20 19:03:41.617048] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.320 [2024-07-20 19:03:41.617286] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.320 [2024-07-20 19:03:41.617530] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.320 [2024-07-20 19:03:41.617553] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.320 [2024-07-20 19:03:41.617569] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.320 [2024-07-20 19:03:41.621164] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.320 [2024-07-20 19:03:41.630448] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.320 [2024-07-20 19:03:41.630963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.320 [2024-07-20 19:03:41.630994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.320 [2024-07-20 19:03:41.631012] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.320 [2024-07-20 19:03:41.631252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.320 [2024-07-20 19:03:41.631495] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.320 [2024-07-20 19:03:41.631519] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.320 [2024-07-20 19:03:41.631534] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.320 [2024-07-20 19:03:41.635115] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.578 [2024-07-20 19:03:41.644438] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.578 [2024-07-20 19:03:41.644990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.578 [2024-07-20 19:03:41.645025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.578 [2024-07-20 19:03:41.645045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.578 [2024-07-20 19:03:41.645285] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.578 [2024-07-20 19:03:41.645545] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.578 [2024-07-20 19:03:41.645571] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.578 [2024-07-20 19:03:41.645587] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.578 [2024-07-20 19:03:41.649249] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.578 [2024-07-20 19:03:41.658329] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.578 [2024-07-20 19:03:41.658835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.578 [2024-07-20 19:03:41.658867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.578 [2024-07-20 19:03:41.658886] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.578 [2024-07-20 19:03:41.659126] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.578 [2024-07-20 19:03:41.659369] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.578 [2024-07-20 19:03:41.659393] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.578 [2024-07-20 19:03:41.659409] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.578 [2024-07-20 19:03:41.662992] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.578 [2024-07-20 19:03:41.672277] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.578 [2024-07-20 19:03:41.672843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.578 [2024-07-20 19:03:41.672876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.578 [2024-07-20 19:03:41.672894] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.579 [2024-07-20 19:03:41.673134] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.579 [2024-07-20 19:03:41.673378] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.579 [2024-07-20 19:03:41.673401] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.579 [2024-07-20 19:03:41.673417] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.579 [2024-07-20 19:03:41.677045] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.579 [2024-07-20 19:03:41.686149] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.579 [2024-07-20 19:03:41.686663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.579 [2024-07-20 19:03:41.686695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.579 [2024-07-20 19:03:41.686714] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.579 [2024-07-20 19:03:41.686965] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.579 [2024-07-20 19:03:41.687209] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.579 [2024-07-20 19:03:41.687233] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.579 [2024-07-20 19:03:41.687248] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.579 [2024-07-20 19:03:41.690835] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.579 [2024-07-20 19:03:41.700126] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.579 [2024-07-20 19:03:41.700652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.579 [2024-07-20 19:03:41.700683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.579 [2024-07-20 19:03:41.700701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.579 [2024-07-20 19:03:41.700950] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.579 [2024-07-20 19:03:41.701194] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.579 [2024-07-20 19:03:41.701218] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.579 [2024-07-20 19:03:41.701233] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.579 [2024-07-20 19:03:41.704811] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.579 [2024-07-20 19:03:41.714043] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.579 [2024-07-20 19:03:41.714640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.579 [2024-07-20 19:03:41.714671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.579 [2024-07-20 19:03:41.714689] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.579 [2024-07-20 19:03:41.714938] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.579 [2024-07-20 19:03:41.715182] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.579 [2024-07-20 19:03:41.715206] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.579 [2024-07-20 19:03:41.715222] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.579 [2024-07-20 19:03:41.718799] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.579 [2024-07-20 19:03:41.728078] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.579 [2024-07-20 19:03:41.728777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.579 [2024-07-20 19:03:41.728832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.579 [2024-07-20 19:03:41.728851] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.579 [2024-07-20 19:03:41.729090] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.579 [2024-07-20 19:03:41.729334] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.579 [2024-07-20 19:03:41.729357] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.579 [2024-07-20 19:03:41.729373] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.579 [2024-07-20 19:03:41.732957] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.579 [2024-07-20 19:03:41.742027] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.579 [2024-07-20 19:03:41.742532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.579 [2024-07-20 19:03:41.742564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.579 [2024-07-20 19:03:41.742587] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.579 [2024-07-20 19:03:41.742839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.579 [2024-07-20 19:03:41.743083] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.579 [2024-07-20 19:03:41.743107] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.579 [2024-07-20 19:03:41.743123] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.579 [2024-07-20 19:03:41.746694] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.579 [2024-07-20 19:03:41.756003] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.579 [2024-07-20 19:03:41.756529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.579 [2024-07-20 19:03:41.756560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.579 [2024-07-20 19:03:41.756578] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.579 [2024-07-20 19:03:41.756827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.579 [2024-07-20 19:03:41.757070] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.579 [2024-07-20 19:03:41.757094] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.579 [2024-07-20 19:03:41.757109] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.579 [2024-07-20 19:03:41.760681] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.579 [2024-07-20 19:03:41.769967] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.579 [2024-07-20 19:03:41.770496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.579 [2024-07-20 19:03:41.770527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.579 [2024-07-20 19:03:41.770545] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.579 [2024-07-20 19:03:41.770783] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.579 [2024-07-20 19:03:41.771045] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.579 [2024-07-20 19:03:41.771068] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.579 [2024-07-20 19:03:41.771084] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.579 [2024-07-20 19:03:41.774655] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.579 [2024-07-20 19:03:41.783942] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.579 [2024-07-20 19:03:41.784467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.579 [2024-07-20 19:03:41.784497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.579 [2024-07-20 19:03:41.784515] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.579 [2024-07-20 19:03:41.784753] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.579 [2024-07-20 19:03:41.785006] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.579 [2024-07-20 19:03:41.785036] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.579 [2024-07-20 19:03:41.785052] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.579 [2024-07-20 19:03:41.788626] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.579 [2024-07-20 19:03:41.797919] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.579 [2024-07-20 19:03:41.798637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.579 [2024-07-20 19:03:41.798684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.579 [2024-07-20 19:03:41.798702] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.579 [2024-07-20 19:03:41.798953] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.579 [2024-07-20 19:03:41.799202] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.579 [2024-07-20 19:03:41.799226] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.579 [2024-07-20 19:03:41.799242] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.579 [2024-07-20 19:03:41.802821] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.579 [2024-07-20 19:03:41.811889] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.579 [2024-07-20 19:03:41.812408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.579 [2024-07-20 19:03:41.812438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.579 [2024-07-20 19:03:41.812456] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.579 [2024-07-20 19:03:41.812694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.579 [2024-07-20 19:03:41.812948] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.579 [2024-07-20 19:03:41.812972] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.579 [2024-07-20 19:03:41.812988] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.579 [2024-07-20 19:03:41.816563] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.579 [2024-07-20 19:03:41.825846] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.579 [2024-07-20 19:03:41.826374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.579 [2024-07-20 19:03:41.826405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.579 [2024-07-20 19:03:41.826422] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.579 [2024-07-20 19:03:41.826661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.579 [2024-07-20 19:03:41.826915] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.579 [2024-07-20 19:03:41.826939] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.579 [2024-07-20 19:03:41.826954] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.579 [2024-07-20 19:03:41.830525] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.579 [2024-07-20 19:03:41.839810] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.579 [2024-07-20 19:03:41.840393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.579 [2024-07-20 19:03:41.840420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.579 [2024-07-20 19:03:41.840435] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.579 [2024-07-20 19:03:41.840677] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.579 [2024-07-20 19:03:41.840932] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.579 [2024-07-20 19:03:41.840957] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.579 [2024-07-20 19:03:41.840972] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.579 [2024-07-20 19:03:41.844542] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.579 [2024-07-20 19:03:41.853826] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.579 [2024-07-20 19:03:41.854607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.579 [2024-07-20 19:03:41.854656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.579 [2024-07-20 19:03:41.854673] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.579 [2024-07-20 19:03:41.854923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.579 [2024-07-20 19:03:41.855166] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.579 [2024-07-20 19:03:41.855191] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.579 [2024-07-20 19:03:41.855206] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.579 [2024-07-20 19:03:41.858784] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.579 [2024-07-20 19:03:41.867858] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.579 [2024-07-20 19:03:41.868477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.579 [2024-07-20 19:03:41.868526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.579 [2024-07-20 19:03:41.868544] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.579 [2024-07-20 19:03:41.868782] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.579 [2024-07-20 19:03:41.869046] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.579 [2024-07-20 19:03:41.869070] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.579 [2024-07-20 19:03:41.869085] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.579 [2024-07-20 19:03:41.872656] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.579 [2024-07-20 19:03:41.881723] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.580 [2024-07-20 19:03:41.882231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.580 [2024-07-20 19:03:41.882262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.580 [2024-07-20 19:03:41.882280] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.580 [2024-07-20 19:03:41.882523] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.580 [2024-07-20 19:03:41.882766] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.580 [2024-07-20 19:03:41.882790] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.580 [2024-07-20 19:03:41.882817] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.580 [2024-07-20 19:03:41.886390] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.580 [2024-07-20 19:03:41.895670] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.580 [2024-07-20 19:03:41.896174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.580 [2024-07-20 19:03:41.896205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.580 [2024-07-20 19:03:41.896223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.580 [2024-07-20 19:03:41.896461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.580 [2024-07-20 19:03:41.896704] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.580 [2024-07-20 19:03:41.896727] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.580 [2024-07-20 19:03:41.896743] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.580 [2024-07-20 19:03:41.900473] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.838 [2024-07-20 19:03:41.909510] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.838 [2024-07-20 19:03:41.910010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.838 [2024-07-20 19:03:41.910044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.838 [2024-07-20 19:03:41.910063] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.838 [2024-07-20 19:03:41.910302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.838 [2024-07-20 19:03:41.910545] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.838 [2024-07-20 19:03:41.910569] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.838 [2024-07-20 19:03:41.910585] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.838 [2024-07-20 19:03:41.914170] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.838 [2024-07-20 19:03:41.923448] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.838 [2024-07-20 19:03:41.923951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.838 [2024-07-20 19:03:41.923982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.838 [2024-07-20 19:03:41.924000] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.838 [2024-07-20 19:03:41.924239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.838 [2024-07-20 19:03:41.924482] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.838 [2024-07-20 19:03:41.924506] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.838 [2024-07-20 19:03:41.924530] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.838 [2024-07-20 19:03:41.928117] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.838 [2024-07-20 19:03:41.937395] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.839 [2024-07-20 19:03:41.937982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.839 [2024-07-20 19:03:41.938015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.839 [2024-07-20 19:03:41.938034] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.839 [2024-07-20 19:03:41.938273] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.839 [2024-07-20 19:03:41.938517] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.839 [2024-07-20 19:03:41.938541] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.839 [2024-07-20 19:03:41.938557] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.839 [2024-07-20 19:03:41.942149] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.839 [2024-07-20 19:03:41.951436] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.839 [2024-07-20 19:03:41.951969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.839 [2024-07-20 19:03:41.952009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.839 [2024-07-20 19:03:41.952028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.839 [2024-07-20 19:03:41.952267] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.839 [2024-07-20 19:03:41.952510] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.839 [2024-07-20 19:03:41.952534] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.839 [2024-07-20 19:03:41.952549] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.839 [2024-07-20 19:03:41.956135] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.839 [2024-07-20 19:03:41.965503] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.839 [2024-07-20 19:03:41.966027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.839 [2024-07-20 19:03:41.966058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.839 [2024-07-20 19:03:41.966077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.839 [2024-07-20 19:03:41.966315] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.839 [2024-07-20 19:03:41.966559] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.839 [2024-07-20 19:03:41.966583] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.839 [2024-07-20 19:03:41.966599] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.839 [2024-07-20 19:03:41.970185] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.839 [2024-07-20 19:03:41.979465] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.839 [2024-07-20 19:03:41.980000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.839 [2024-07-20 19:03:41.980037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.839 [2024-07-20 19:03:41.980056] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.839 [2024-07-20 19:03:41.980294] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.839 [2024-07-20 19:03:41.980538] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.839 [2024-07-20 19:03:41.980561] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.839 [2024-07-20 19:03:41.980578] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.839 [2024-07-20 19:03:41.984160] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.839 [2024-07-20 19:03:41.993511] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.839 [2024-07-20 19:03:41.994018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.839 [2024-07-20 19:03:41.994049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.839 [2024-07-20 19:03:41.994068] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.839 [2024-07-20 19:03:41.994306] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.839 [2024-07-20 19:03:41.994549] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.839 [2024-07-20 19:03:41.994573] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.839 [2024-07-20 19:03:41.994589] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.839 [2024-07-20 19:03:41.998171] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.839 [2024-07-20 19:03:42.007448] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.839 [2024-07-20 19:03:42.007957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.839 [2024-07-20 19:03:42.007989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.839 [2024-07-20 19:03:42.008007] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.839 [2024-07-20 19:03:42.008246] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.839 [2024-07-20 19:03:42.008489] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.839 [2024-07-20 19:03:42.008513] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.839 [2024-07-20 19:03:42.008529] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.839 [2024-07-20 19:03:42.012112] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.839 [2024-07-20 19:03:42.021386] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.839 [2024-07-20 19:03:42.021910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.839 [2024-07-20 19:03:42.021941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.839 [2024-07-20 19:03:42.021959] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.839 [2024-07-20 19:03:42.022197] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.839 [2024-07-20 19:03:42.022447] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.839 [2024-07-20 19:03:42.022471] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.839 [2024-07-20 19:03:42.022487] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.839 [2024-07-20 19:03:42.026068] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.839 [2024-07-20 19:03:42.035348] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.839 [2024-07-20 19:03:42.035873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.839 [2024-07-20 19:03:42.035904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.839 [2024-07-20 19:03:42.035922] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.839 [2024-07-20 19:03:42.036160] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.839 [2024-07-20 19:03:42.036404] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.839 [2024-07-20 19:03:42.036428] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.839 [2024-07-20 19:03:42.036444] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.839 [2024-07-20 19:03:42.040026] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.839 [2024-07-20 19:03:42.049303] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.839 [2024-07-20 19:03:42.049820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.839 [2024-07-20 19:03:42.049847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.839 [2024-07-20 19:03:42.049862] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.839 [2024-07-20 19:03:42.050120] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.839 [2024-07-20 19:03:42.050364] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.839 [2024-07-20 19:03:42.050388] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.839 [2024-07-20 19:03:42.050404] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.839 [2024-07-20 19:03:42.053985] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.839 [2024-07-20 19:03:42.063261] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.839 [2024-07-20 19:03:42.063743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.839 [2024-07-20 19:03:42.063770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.839 [2024-07-20 19:03:42.063819] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.839 [2024-07-20 19:03:42.064078] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.839 [2024-07-20 19:03:42.064321] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.839 [2024-07-20 19:03:42.064345] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.839 [2024-07-20 19:03:42.064361] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.839 [2024-07-20 19:03:42.067948] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.839 [2024-07-20 19:03:42.077227] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.839 [2024-07-20 19:03:42.077732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.839 [2024-07-20 19:03:42.077763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.839 [2024-07-20 19:03:42.077781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.839 [2024-07-20 19:03:42.078030] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.839 [2024-07-20 19:03:42.078274] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.839 [2024-07-20 19:03:42.078298] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.839 [2024-07-20 19:03:42.078313] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.839 [2024-07-20 19:03:42.081893] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.839 [2024-07-20 19:03:42.091177] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.839 [2024-07-20 19:03:42.091675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.839 [2024-07-20 19:03:42.091706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.839 [2024-07-20 19:03:42.091724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.839 [2024-07-20 19:03:42.091974] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.839 [2024-07-20 19:03:42.092218] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.839 [2024-07-20 19:03:42.092242] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.839 [2024-07-20 19:03:42.092258] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.839 [2024-07-20 19:03:42.095835] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.839 [2024-07-20 19:03:42.105129] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.839 [2024-07-20 19:03:42.105645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.839 [2024-07-20 19:03:42.105675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.839 [2024-07-20 19:03:42.105693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.839 [2024-07-20 19:03:42.105942] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.839 [2024-07-20 19:03:42.106186] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.839 [2024-07-20 19:03:42.106210] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.839 [2024-07-20 19:03:42.106226] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.839 [2024-07-20 19:03:42.109817] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.839 [2024-07-20 19:03:42.119109] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.839 [2024-07-20 19:03:42.119617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.839 [2024-07-20 19:03:42.119648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.839 [2024-07-20 19:03:42.119671] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.839 [2024-07-20 19:03:42.119923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.839 [2024-07-20 19:03:42.120166] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.839 [2024-07-20 19:03:42.120190] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.839 [2024-07-20 19:03:42.120206] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.839 [2024-07-20 19:03:42.123782] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.839 [2024-07-20 19:03:42.133072] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.839 [2024-07-20 19:03:42.133600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.839 [2024-07-20 19:03:42.133630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.839 [2024-07-20 19:03:42.133648] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.839 [2024-07-20 19:03:42.133899] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.839 [2024-07-20 19:03:42.134143] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.839 [2024-07-20 19:03:42.134167] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.839 [2024-07-20 19:03:42.134182] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.840 [2024-07-20 19:03:42.137759] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.840 [2024-07-20 19:03:42.147054] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.840 [2024-07-20 19:03:42.147574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.840 [2024-07-20 19:03:42.147605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:31.840 [2024-07-20 19:03:42.147623] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:31.840 [2024-07-20 19:03:42.147873] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:31.840 [2024-07-20 19:03:42.148117] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.840 [2024-07-20 19:03:42.148141] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.840 [2024-07-20 19:03:42.148157] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.840 [2024-07-20 19:03:42.151731] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.099 [2024-07-20 19:03:42.161253] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.099 [2024-07-20 19:03:42.161803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.099 [2024-07-20 19:03:42.161837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.099 [2024-07-20 19:03:42.161856] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.099 [2024-07-20 19:03:42.162095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.099 [2024-07-20 19:03:42.162338] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.099 [2024-07-20 19:03:42.162368] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.099 [2024-07-20 19:03:42.162384] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.099 [2024-07-20 19:03:42.166074] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.099 [2024-07-20 19:03:42.175159] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.099 [2024-07-20 19:03:42.175899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.099 [2024-07-20 19:03:42.175989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.099 [2024-07-20 19:03:42.176008] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.099 [2024-07-20 19:03:42.176248] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.099 [2024-07-20 19:03:42.176491] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.099 [2024-07-20 19:03:42.176515] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.099 [2024-07-20 19:03:42.176531] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.099 [2024-07-20 19:03:42.180121] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.099 [2024-07-20 19:03:42.189201] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.099 [2024-07-20 19:03:42.189725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.099 [2024-07-20 19:03:42.189756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.099 [2024-07-20 19:03:42.189774] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.099 [2024-07-20 19:03:42.190029] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.099 [2024-07-20 19:03:42.190273] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.099 [2024-07-20 19:03:42.190297] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.099 [2024-07-20 19:03:42.190312] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.099 [2024-07-20 19:03:42.193896] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.099 [2024-07-20 19:03:42.203179] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.099 [2024-07-20 19:03:42.203699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.099 [2024-07-20 19:03:42.203730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.099 [2024-07-20 19:03:42.203748] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.099 [2024-07-20 19:03:42.203998] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.099 [2024-07-20 19:03:42.204243] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.099 [2024-07-20 19:03:42.204266] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.099 [2024-07-20 19:03:42.204282] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.099 [2024-07-20 19:03:42.207867] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.099 [2024-07-20 19:03:42.216973] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.099 [2024-07-20 19:03:42.217508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.099 [2024-07-20 19:03:42.217539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.099 [2024-07-20 19:03:42.217557] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.099 [2024-07-20 19:03:42.217807] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.099 [2024-07-20 19:03:42.218051] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.099 [2024-07-20 19:03:42.218075] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.099 [2024-07-20 19:03:42.218091] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.099 [2024-07-20 19:03:42.221666] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.099 [2024-07-20 19:03:42.230964] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.099 [2024-07-20 19:03:42.231489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.099 [2024-07-20 19:03:42.231519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.099 [2024-07-20 19:03:42.231537] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.099 [2024-07-20 19:03:42.231775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.099 [2024-07-20 19:03:42.232030] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.099 [2024-07-20 19:03:42.232054] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.099 [2024-07-20 19:03:42.232070] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.099 [2024-07-20 19:03:42.235645] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.099 [2024-07-20 19:03:42.244939] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.099 [2024-07-20 19:03:42.245469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.099 [2024-07-20 19:03:42.245500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.099 [2024-07-20 19:03:42.245518] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.099 [2024-07-20 19:03:42.245756] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.099 [2024-07-20 19:03:42.246012] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.099 [2024-07-20 19:03:42.246036] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.099 [2024-07-20 19:03:42.246051] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.099 [2024-07-20 19:03:42.249629] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.099 [2024-07-20 19:03:42.258927] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.099 [2024-07-20 19:03:42.259448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.099 [2024-07-20 19:03:42.259480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.099 [2024-07-20 19:03:42.259503] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.099 [2024-07-20 19:03:42.259743] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.099 [2024-07-20 19:03:42.260000] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.099 [2024-07-20 19:03:42.260026] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.099 [2024-07-20 19:03:42.260043] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.099 [2024-07-20 19:03:42.263618] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.099 [2024-07-20 19:03:42.272918] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.099 [2024-07-20 19:03:42.273455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.099 [2024-07-20 19:03:42.273487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.099 [2024-07-20 19:03:42.273505] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.099 [2024-07-20 19:03:42.273744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.099 [2024-07-20 19:03:42.274002] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.099 [2024-07-20 19:03:42.274027] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.099 [2024-07-20 19:03:42.274044] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.099 [2024-07-20 19:03:42.277619] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.099 [2024-07-20 19:03:42.286913] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.099 [2024-07-20 19:03:42.287450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.099 [2024-07-20 19:03:42.287482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.099 [2024-07-20 19:03:42.287500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.099 [2024-07-20 19:03:42.287738] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.099 [2024-07-20 19:03:42.287996] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.099 [2024-07-20 19:03:42.288022] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.099 [2024-07-20 19:03:42.288038] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.099 [2024-07-20 19:03:42.291622] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.099 [2024-07-20 19:03:42.300918] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.099 [2024-07-20 19:03:42.301682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.100 [2024-07-20 19:03:42.301734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.100 [2024-07-20 19:03:42.301752] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.100 [2024-07-20 19:03:42.302003] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.100 [2024-07-20 19:03:42.302246] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.100 [2024-07-20 19:03:42.302270] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.100 [2024-07-20 19:03:42.302293] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.100 [2024-07-20 19:03:42.305878] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.100 [2024-07-20 19:03:42.314958] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.100 [2024-07-20 19:03:42.315744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.100 [2024-07-20 19:03:42.315806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.100 [2024-07-20 19:03:42.315827] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.100 [2024-07-20 19:03:42.316066] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.100 [2024-07-20 19:03:42.316309] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.100 [2024-07-20 19:03:42.316335] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.100 [2024-07-20 19:03:42.316352] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.100 [2024-07-20 19:03:42.319941] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.100 [2024-07-20 19:03:42.328822] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.100 [2024-07-20 19:03:42.329397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.100 [2024-07-20 19:03:42.329439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.100 [2024-07-20 19:03:42.329456] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.100 [2024-07-20 19:03:42.329708] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.100 [2024-07-20 19:03:42.329963] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.100 [2024-07-20 19:03:42.329989] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.100 [2024-07-20 19:03:42.330005] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.100 [2024-07-20 19:03:42.333582] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.100 [2024-07-20 19:03:42.342668] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.100 [2024-07-20 19:03:42.343214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.100 [2024-07-20 19:03:42.343245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.100 [2024-07-20 19:03:42.343264] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.100 [2024-07-20 19:03:42.343502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.100 [2024-07-20 19:03:42.343748] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.100 [2024-07-20 19:03:42.343774] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.100 [2024-07-20 19:03:42.343790] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.100 [2024-07-20 19:03:42.347383] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.100 [2024-07-20 19:03:42.356685] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.100 [2024-07-20 19:03:42.357222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.100 [2024-07-20 19:03:42.357259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.100 [2024-07-20 19:03:42.357278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.100 [2024-07-20 19:03:42.357518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.100 [2024-07-20 19:03:42.357763] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.100 [2024-07-20 19:03:42.357789] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.100 [2024-07-20 19:03:42.357817] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.100 [2024-07-20 19:03:42.361396] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.100 [2024-07-20 19:03:42.370696] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.100 [2024-07-20 19:03:42.371243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.100 [2024-07-20 19:03:42.371274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.100 [2024-07-20 19:03:42.371293] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.100 [2024-07-20 19:03:42.371532] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.100 [2024-07-20 19:03:42.371776] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.100 [2024-07-20 19:03:42.371815] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.100 [2024-07-20 19:03:42.371833] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.100 [2024-07-20 19:03:42.375415] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.100 [2024-07-20 19:03:42.384715] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.100 [2024-07-20 19:03:42.385255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.100 [2024-07-20 19:03:42.385288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.100 [2024-07-20 19:03:42.385306] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.100 [2024-07-20 19:03:42.385556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.100 [2024-07-20 19:03:42.385812] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.100 [2024-07-20 19:03:42.385837] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.100 [2024-07-20 19:03:42.385853] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.100 [2024-07-20 19:03:42.389435] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.100 [2024-07-20 19:03:42.398742] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.100 [2024-07-20 19:03:42.399256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.100 [2024-07-20 19:03:42.399288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.100 [2024-07-20 19:03:42.399307] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.100 [2024-07-20 19:03:42.399551] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.100 [2024-07-20 19:03:42.399808] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.100 [2024-07-20 19:03:42.399834] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.100 [2024-07-20 19:03:42.399850] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.100 [2024-07-20 19:03:42.403427] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.100 [2024-07-20 19:03:42.412731] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.100 [2024-07-20 19:03:42.413258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.100 [2024-07-20 19:03:42.413290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.100 [2024-07-20 19:03:42.413308] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.100 [2024-07-20 19:03:42.413547] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.100 [2024-07-20 19:03:42.413791] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.100 [2024-07-20 19:03:42.413826] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.100 [2024-07-20 19:03:42.413843] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.100 [2024-07-20 19:03:42.417474] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.374 [2024-07-20 19:03:42.426868] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.374 [2024-07-20 19:03:42.427669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-07-20 19:03:42.427723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.374 [2024-07-20 19:03:42.427742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.374 [2024-07-20 19:03:42.427997] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.374 [2024-07-20 19:03:42.428243] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.374 [2024-07-20 19:03:42.428268] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.374 [2024-07-20 19:03:42.428284] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.374 [2024-07-20 19:03:42.431876] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.374 [2024-07-20 19:03:42.440785] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.374 [2024-07-20 19:03:42.441399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-07-20 19:03:42.441432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.374 [2024-07-20 19:03:42.441451] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.374 [2024-07-20 19:03:42.441691] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.374 [2024-07-20 19:03:42.441947] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.374 [2024-07-20 19:03:42.441972] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.374 [2024-07-20 19:03:42.441994] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.374 [2024-07-20 19:03:42.445578] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.374 [2024-07-20 19:03:42.454668] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.374 [2024-07-20 19:03:42.455366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-07-20 19:03:42.455420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.374 [2024-07-20 19:03:42.455439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.374 [2024-07-20 19:03:42.455679] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.374 [2024-07-20 19:03:42.455940] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.374 [2024-07-20 19:03:42.455966] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.374 [2024-07-20 19:03:42.455982] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.374 [2024-07-20 19:03:42.459558] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.374 [2024-07-20 19:03:42.468550] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.374 [2024-07-20 19:03:42.469188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-07-20 19:03:42.469234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.374 [2024-07-20 19:03:42.469255] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.374 [2024-07-20 19:03:42.469501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.374 [2024-07-20 19:03:42.469747] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.374 [2024-07-20 19:03:42.469772] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.374 [2024-07-20 19:03:42.469789] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.374 [2024-07-20 19:03:42.473333] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.374 [2024-07-20 19:03:42.482439] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.374 [2024-07-20 19:03:42.482957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-07-20 19:03:42.482997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.374 [2024-07-20 19:03:42.483013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.374 [2024-07-20 19:03:42.483269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.374 [2024-07-20 19:03:42.483513] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.374 [2024-07-20 19:03:42.483539] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.374 [2024-07-20 19:03:42.483556] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.374 [2024-07-20 19:03:42.487084] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.374 [2024-07-20 19:03:42.496398] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.374 [2024-07-20 19:03:42.496934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-07-20 19:03:42.496976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.374 [2024-07-20 19:03:42.497009] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.374 [2024-07-20 19:03:42.497258] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.374 [2024-07-20 19:03:42.497502] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.374 [2024-07-20 19:03:42.497527] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.374 [2024-07-20 19:03:42.497544] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.374 [2024-07-20 19:03:42.501135] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.374 [2024-07-20 19:03:42.510430] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.374 [2024-07-20 19:03:42.510985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-07-20 19:03:42.511019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.374 [2024-07-20 19:03:42.511038] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.374 [2024-07-20 19:03:42.511278] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.374 [2024-07-20 19:03:42.511522] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.374 [2024-07-20 19:03:42.511548] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.374 [2024-07-20 19:03:42.511565] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.374 [2024-07-20 19:03:42.515247] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.374 [2024-07-20 19:03:42.524331] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.374 [2024-07-20 19:03:42.524923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-07-20 19:03:42.524952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.374 [2024-07-20 19:03:42.524984] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.374 [2024-07-20 19:03:42.525235] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.374 [2024-07-20 19:03:42.525479] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.374 [2024-07-20 19:03:42.525505] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.374 [2024-07-20 19:03:42.525522] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.374 [2024-07-20 19:03:42.529088] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.374 [2024-07-20 19:03:42.538253] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.374 [2024-07-20 19:03:42.538925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-07-20 19:03:42.538954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.374 [2024-07-20 19:03:42.538986] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.374 [2024-07-20 19:03:42.539229] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.374 [2024-07-20 19:03:42.539480] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.374 [2024-07-20 19:03:42.539506] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.374 [2024-07-20 19:03:42.539523] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.374 [2024-07-20 19:03:42.543108] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.374 [2024-07-20 19:03:42.552140] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.374 [2024-07-20 19:03:42.552878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-07-20 19:03:42.552907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.374 [2024-07-20 19:03:42.552924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.374 [2024-07-20 19:03:42.553162] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.374 [2024-07-20 19:03:42.553407] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.374 [2024-07-20 19:03:42.553433] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.374 [2024-07-20 19:03:42.553449] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.374 [2024-07-20 19:03:42.556990] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.374 [2024-07-20 19:03:42.565969] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.374 [2024-07-20 19:03:42.566560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-07-20 19:03:42.566592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.374 [2024-07-20 19:03:42.566610] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.374 [2024-07-20 19:03:42.566871] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.374 [2024-07-20 19:03:42.567093] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.374 [2024-07-20 19:03:42.567119] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.374 [2024-07-20 19:03:42.567136] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.374 [2024-07-20 19:03:42.570714] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.374 [2024-07-20 19:03:42.579807] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.375 [2024-07-20 19:03:42.580347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-07-20 19:03:42.580379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.375 [2024-07-20 19:03:42.580398] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.375 [2024-07-20 19:03:42.580638] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.375 [2024-07-20 19:03:42.580897] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.375 [2024-07-20 19:03:42.580923] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.375 [2024-07-20 19:03:42.580941] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.375 [2024-07-20 19:03:42.584522] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.375 [2024-07-20 19:03:42.593817] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.375 [2024-07-20 19:03:42.594342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-07-20 19:03:42.594374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.375 [2024-07-20 19:03:42.594392] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.375 [2024-07-20 19:03:42.594631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.375 [2024-07-20 19:03:42.594888] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.375 [2024-07-20 19:03:42.594915] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.375 [2024-07-20 19:03:42.594932] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.375 [2024-07-20 19:03:42.598511] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.375 [2024-07-20 19:03:42.607808] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.375 [2024-07-20 19:03:42.608465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-07-20 19:03:42.608523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.375 [2024-07-20 19:03:42.608544] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.375 [2024-07-20 19:03:42.608791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.375 [2024-07-20 19:03:42.609051] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.375 [2024-07-20 19:03:42.609077] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.375 [2024-07-20 19:03:42.609094] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.375 [2024-07-20 19:03:42.612677] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.375 [2024-07-20 19:03:42.621765] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.375 [2024-07-20 19:03:42.622286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-07-20 19:03:42.622322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.375 [2024-07-20 19:03:42.622340] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.375 [2024-07-20 19:03:42.622581] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.375 [2024-07-20 19:03:42.622846] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.375 [2024-07-20 19:03:42.622874] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.375 [2024-07-20 19:03:42.622891] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.375 [2024-07-20 19:03:42.626471] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.375 [2024-07-20 19:03:42.635763] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.375 [2024-07-20 19:03:42.636305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-07-20 19:03:42.636338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.375 [2024-07-20 19:03:42.636366] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.375 [2024-07-20 19:03:42.636607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.375 [2024-07-20 19:03:42.636865] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.375 [2024-07-20 19:03:42.636891] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.375 [2024-07-20 19:03:42.636908] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.375 [2024-07-20 19:03:42.640486] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.375 [2024-07-20 19:03:42.649778] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.375 [2024-07-20 19:03:42.650287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-07-20 19:03:42.650320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.375 [2024-07-20 19:03:42.650338] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.375 [2024-07-20 19:03:42.650578] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.375 [2024-07-20 19:03:42.650835] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.375 [2024-07-20 19:03:42.650862] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.375 [2024-07-20 19:03:42.650879] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.375 [2024-07-20 19:03:42.654453] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.375 [2024-07-20 19:03:42.663740] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.375 [2024-07-20 19:03:42.664271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-07-20 19:03:42.664304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.375 [2024-07-20 19:03:42.664322] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.375 [2024-07-20 19:03:42.664561] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.375 [2024-07-20 19:03:42.664819] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.375 [2024-07-20 19:03:42.664845] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.375 [2024-07-20 19:03:42.664862] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.375 [2024-07-20 19:03:42.668438] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.375 [2024-07-20 19:03:42.677758] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.375 [2024-07-20 19:03:42.678400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-07-20 19:03:42.678452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.375 [2024-07-20 19:03:42.678471] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.375 [2024-07-20 19:03:42.678710] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.375 [2024-07-20 19:03:42.678969] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.375 [2024-07-20 19:03:42.679001] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.375 [2024-07-20 19:03:42.679018] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.375 [2024-07-20 19:03:42.682596] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.375 [2024-07-20 19:03:42.691710] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.375 [2024-07-20 19:03:42.692283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-07-20 19:03:42.692336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.375 [2024-07-20 19:03:42.692355] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.375 [2024-07-20 19:03:42.692595] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.375 [2024-07-20 19:03:42.692862] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.375 [2024-07-20 19:03:42.692890] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.375 [2024-07-20 19:03:42.692909] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.633 [2024-07-20 19:03:42.696712] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.633 [2024-07-20 19:03:42.705708] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.633 [2024-07-20 19:03:42.706232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.633 [2024-07-20 19:03:42.706267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.633 [2024-07-20 19:03:42.706287] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.633 [2024-07-20 19:03:42.706528] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.633 [2024-07-20 19:03:42.706773] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.633 [2024-07-20 19:03:42.706807] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.633 [2024-07-20 19:03:42.706826] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.633 [2024-07-20 19:03:42.710398] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.633 [2024-07-20 19:03:42.719556] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.633 [2024-07-20 19:03:42.720073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.633 [2024-07-20 19:03:42.720105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.633 [2024-07-20 19:03:42.720124] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.633 [2024-07-20 19:03:42.720364] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.633 [2024-07-20 19:03:42.720608] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.633 [2024-07-20 19:03:42.720633] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.633 [2024-07-20 19:03:42.720650] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.633 [2024-07-20 19:03:42.724231] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.633 [2024-07-20 19:03:42.733522] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.633 [2024-07-20 19:03:42.734058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.633 [2024-07-20 19:03:42.734101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.633 [2024-07-20 19:03:42.734120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.633 [2024-07-20 19:03:42.734359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.633 [2024-07-20 19:03:42.734602] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.633 [2024-07-20 19:03:42.734628] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.633 [2024-07-20 19:03:42.734644] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.633 [2024-07-20 19:03:42.738225] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.633 [2024-07-20 19:03:42.747516] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.633 [2024-07-20 19:03:42.748059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.633 [2024-07-20 19:03:42.748092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.633 [2024-07-20 19:03:42.748111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.633 [2024-07-20 19:03:42.748350] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.633 [2024-07-20 19:03:42.748593] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.633 [2024-07-20 19:03:42.748618] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.634 [2024-07-20 19:03:42.748635] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.634 [2024-07-20 19:03:42.752235] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.634 [2024-07-20 19:03:42.761583] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.634 [2024-07-20 19:03:42.762102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.634 [2024-07-20 19:03:42.762135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.634 [2024-07-20 19:03:42.762153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.634 [2024-07-20 19:03:42.762393] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.634 [2024-07-20 19:03:42.762637] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.634 [2024-07-20 19:03:42.762663] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.634 [2024-07-20 19:03:42.762679] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.634 [2024-07-20 19:03:42.766271] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.634 [2024-07-20 19:03:42.775573] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.634 [2024-07-20 19:03:42.776061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.634 [2024-07-20 19:03:42.776093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.634 [2024-07-20 19:03:42.776112] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.634 [2024-07-20 19:03:42.776357] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.634 [2024-07-20 19:03:42.776602] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.634 [2024-07-20 19:03:42.776627] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.634 [2024-07-20 19:03:42.776644] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.634 [2024-07-20 19:03:42.780239] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.634 [2024-07-20 19:03:42.789539] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.634 [2024-07-20 19:03:42.790069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.634 [2024-07-20 19:03:42.790101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.634 [2024-07-20 19:03:42.790120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.634 [2024-07-20 19:03:42.790359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.634 [2024-07-20 19:03:42.790603] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.634 [2024-07-20 19:03:42.790628] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.634 [2024-07-20 19:03:42.790645] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.634 [2024-07-20 19:03:42.794232] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.634 [2024-07-20 19:03:42.803525] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.634 [2024-07-20 19:03:42.804029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.634 [2024-07-20 19:03:42.804061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.634 [2024-07-20 19:03:42.804079] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.634 [2024-07-20 19:03:42.804319] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.634 [2024-07-20 19:03:42.804563] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.634 [2024-07-20 19:03:42.804588] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.634 [2024-07-20 19:03:42.804604] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.634 [2024-07-20 19:03:42.808192] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.634 [2024-07-20 19:03:42.817488] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.634 [2024-07-20 19:03:42.818018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.634 [2024-07-20 19:03:42.818047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.634 [2024-07-20 19:03:42.818063] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.634 [2024-07-20 19:03:42.818325] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.634 [2024-07-20 19:03:42.818569] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.634 [2024-07-20 19:03:42.818595] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.634 [2024-07-20 19:03:42.818617] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.634 [2024-07-20 19:03:42.822217] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.634 [2024-07-20 19:03:42.831336] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.634 [2024-07-20 19:03:42.831968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.634 [2024-07-20 19:03:42.832001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.634 [2024-07-20 19:03:42.832020] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.634 [2024-07-20 19:03:42.832259] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.634 [2024-07-20 19:03:42.832502] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.634 [2024-07-20 19:03:42.832528] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.634 [2024-07-20 19:03:42.832545] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.634 [2024-07-20 19:03:42.836158] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.634 [2024-07-20 19:03:42.845253] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.634 [2024-07-20 19:03:42.845801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.634 [2024-07-20 19:03:42.845833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.634 [2024-07-20 19:03:42.845852] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.634 [2024-07-20 19:03:42.846091] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.634 [2024-07-20 19:03:42.846334] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.634 [2024-07-20 19:03:42.846359] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.634 [2024-07-20 19:03:42.846375] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.634 [2024-07-20 19:03:42.849961] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.634 [2024-07-20 19:03:42.859263] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.634 [2024-07-20 19:03:42.859790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.634 [2024-07-20 19:03:42.859832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.634 [2024-07-20 19:03:42.859852] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.634 [2024-07-20 19:03:42.860091] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.634 [2024-07-20 19:03:42.860334] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.634 [2024-07-20 19:03:42.860359] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.634 [2024-07-20 19:03:42.860376] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.634 [2024-07-20 19:03:42.863965] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.634 [2024-07-20 19:03:42.873255] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.634 [2024-07-20 19:03:42.873808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.634 [2024-07-20 19:03:42.873841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.634 [2024-07-20 19:03:42.873859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.634 [2024-07-20 19:03:42.874099] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.634 [2024-07-20 19:03:42.874342] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.634 [2024-07-20 19:03:42.874367] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.634 [2024-07-20 19:03:42.874384] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.634 [2024-07-20 19:03:42.877974] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.634 [2024-07-20 19:03:42.887265] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.634 [2024-07-20 19:03:42.887799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.634 [2024-07-20 19:03:42.887841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.634 [2024-07-20 19:03:42.887861] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.634 [2024-07-20 19:03:42.888102] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.634 [2024-07-20 19:03:42.888345] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.634 [2024-07-20 19:03:42.888370] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.634 [2024-07-20 19:03:42.888387] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.634 [2024-07-20 19:03:42.891982] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.634 [2024-07-20 19:03:42.901273] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.634 [2024-07-20 19:03:42.901774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.634 [2024-07-20 19:03:42.901815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.634 [2024-07-20 19:03:42.901835] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.634 [2024-07-20 19:03:42.902075] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.634 [2024-07-20 19:03:42.902318] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.634 [2024-07-20 19:03:42.902344] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.634 [2024-07-20 19:03:42.902361] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.634 [2024-07-20 19:03:42.905947] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.634 [2024-07-20 19:03:42.915237] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.634 [2024-07-20 19:03:42.915838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.634 [2024-07-20 19:03:42.915870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.635 [2024-07-20 19:03:42.915888] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.635 [2024-07-20 19:03:42.916133] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.635 [2024-07-20 19:03:42.916377] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.635 [2024-07-20 19:03:42.916402] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.635 [2024-07-20 19:03:42.916419] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.635 [2024-07-20 19:03:42.920011] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.635 [2024-07-20 19:03:42.929090] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.635 [2024-07-20 19:03:42.929622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.635 [2024-07-20 19:03:42.929650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.635 [2024-07-20 19:03:42.929666] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.635 [2024-07-20 19:03:42.929943] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.635 [2024-07-20 19:03:42.930187] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.635 [2024-07-20 19:03:42.930213] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.635 [2024-07-20 19:03:42.930229] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.635 [2024-07-20 19:03:42.933814] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.635 [2024-07-20 19:03:42.943103] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.635 [2024-07-20 19:03:42.943603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.635 [2024-07-20 19:03:42.943635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.635 [2024-07-20 19:03:42.943653] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.635 [2024-07-20 19:03:42.943908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.635 [2024-07-20 19:03:42.944152] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.635 [2024-07-20 19:03:42.944178] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.635 [2024-07-20 19:03:42.944194] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.635 [2024-07-20 19:03:42.947772] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.891 [2024-07-20 19:03:42.957126] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.892 [2024-07-20 19:03:42.957637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.892 [2024-07-20 19:03:42.957667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.892 [2024-07-20 19:03:42.957684] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.892 [2024-07-20 19:03:42.957956] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.892 [2024-07-20 19:03:42.958200] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.892 [2024-07-20 19:03:42.958227] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.892 [2024-07-20 19:03:42.958249] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.892 [2024-07-20 19:03:42.961933] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.892 [2024-07-20 19:03:42.970873] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.892 [2024-07-20 19:03:42.971409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.892 [2024-07-20 19:03:42.971442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.892 [2024-07-20 19:03:42.971462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.892 [2024-07-20 19:03:42.971701] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.892 [2024-07-20 19:03:42.971957] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.892 [2024-07-20 19:03:42.971983] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.892 [2024-07-20 19:03:42.972000] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.892 [2024-07-20 19:03:42.975583] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.892 [2024-07-20 19:03:42.984895] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.892 [2024-07-20 19:03:42.985398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.892 [2024-07-20 19:03:42.985431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.892 [2024-07-20 19:03:42.985450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.892 [2024-07-20 19:03:42.985689] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.892 [2024-07-20 19:03:42.985948] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.892 [2024-07-20 19:03:42.985975] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.892 [2024-07-20 19:03:42.985992] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.892 [2024-07-20 19:03:42.989574] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.892 [2024-07-20 19:03:42.998878] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.892 [2024-07-20 19:03:42.999429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.892 [2024-07-20 19:03:42.999461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.892 [2024-07-20 19:03:42.999480] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.892 [2024-07-20 19:03:42.999720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.892 [2024-07-20 19:03:42.999977] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.892 [2024-07-20 19:03:43.000004] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.892 [2024-07-20 19:03:43.000021] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.892 [2024-07-20 19:03:43.003599] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.892 [2024-07-20 19:03:43.012901] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.892 [2024-07-20 19:03:43.013409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.892 [2024-07-20 19:03:43.013446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.892 [2024-07-20 19:03:43.013466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.892 [2024-07-20 19:03:43.013705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.892 [2024-07-20 19:03:43.013964] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.892 [2024-07-20 19:03:43.013990] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.892 [2024-07-20 19:03:43.014007] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.892 [2024-07-20 19:03:43.017585] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.892 [2024-07-20 19:03:43.026885] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.892 [2024-07-20 19:03:43.027386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.892 [2024-07-20 19:03:43.027418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.892 [2024-07-20 19:03:43.027436] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.892 [2024-07-20 19:03:43.027674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.892 [2024-07-20 19:03:43.027932] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.892 [2024-07-20 19:03:43.027958] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.892 [2024-07-20 19:03:43.027975] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.892 [2024-07-20 19:03:43.031553] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.892 [2024-07-20 19:03:43.040849] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.892 [2024-07-20 19:03:43.041374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.892 [2024-07-20 19:03:43.041406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.892 [2024-07-20 19:03:43.041425] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.892 [2024-07-20 19:03:43.041664] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.892 [2024-07-20 19:03:43.041922] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.892 [2024-07-20 19:03:43.041949] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.892 [2024-07-20 19:03:43.041966] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.892 [2024-07-20 19:03:43.045546] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.892 [2024-07-20 19:03:43.054847] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.892 [2024-07-20 19:03:43.055637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.892 [2024-07-20 19:03:43.055688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.892 [2024-07-20 19:03:43.055706] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.892 [2024-07-20 19:03:43.055960] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.892 [2024-07-20 19:03:43.056210] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.892 [2024-07-20 19:03:43.056236] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.892 [2024-07-20 19:03:43.056253] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.892 [2024-07-20 19:03:43.059837] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.892 [2024-07-20 19:03:43.068698] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.892 [2024-07-20 19:03:43.069489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.892 [2024-07-20 19:03:43.069542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.892 [2024-07-20 19:03:43.069561] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.892 [2024-07-20 19:03:43.069813] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.892 [2024-07-20 19:03:43.070058] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.892 [2024-07-20 19:03:43.070083] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.892 [2024-07-20 19:03:43.070100] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.892 [2024-07-20 19:03:43.073679] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.892 [2024-07-20 19:03:43.082576] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.892 [2024-07-20 19:03:43.083115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.892 [2024-07-20 19:03:43.083147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.892 [2024-07-20 19:03:43.083165] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.892 [2024-07-20 19:03:43.083404] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.892 [2024-07-20 19:03:43.083647] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.892 [2024-07-20 19:03:43.083673] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.892 [2024-07-20 19:03:43.083690] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.892 [2024-07-20 19:03:43.087279] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.892 [2024-07-20 19:03:43.096573] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.892 [2024-07-20 19:03:43.097093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.892 [2024-07-20 19:03:43.097127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.892 [2024-07-20 19:03:43.097146] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.892 [2024-07-20 19:03:43.097385] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.892 [2024-07-20 19:03:43.097628] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.892 [2024-07-20 19:03:43.097654] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.892 [2024-07-20 19:03:43.097671] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.892 [2024-07-20 19:03:43.101266] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.892 [2024-07-20 19:03:43.110570] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.892 [2024-07-20 19:03:43.111099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.892 [2024-07-20 19:03:43.111131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.892 [2024-07-20 19:03:43.111150] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.892 [2024-07-20 19:03:43.111389] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.892 [2024-07-20 19:03:43.111633] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.892 [2024-07-20 19:03:43.111658] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.892 [2024-07-20 19:03:43.111675] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.892 [2024-07-20 19:03:43.115267] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.892 [2024-07-20 19:03:43.124428] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.892 [2024-07-20 19:03:43.124951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.892 [2024-07-20 19:03:43.124983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.892 [2024-07-20 19:03:43.125001] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.892 [2024-07-20 19:03:43.125240] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.892 [2024-07-20 19:03:43.125484] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.892 [2024-07-20 19:03:43.125509] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.892 [2024-07-20 19:03:43.125525] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.892 [2024-07-20 19:03:43.129099] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.892 [2024-07-20 19:03:43.138378] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.892 [2024-07-20 19:03:43.138911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.892 [2024-07-20 19:03:43.138943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.892 [2024-07-20 19:03:43.138961] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.892 [2024-07-20 19:03:43.139200] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.892 [2024-07-20 19:03:43.139443] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.892 [2024-07-20 19:03:43.139468] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.892 [2024-07-20 19:03:43.139485] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.892 [2024-07-20 19:03:43.143079] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.892 [2024-07-20 19:03:43.152361] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.892 [2024-07-20 19:03:43.152889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.892 [2024-07-20 19:03:43.152920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.892 [2024-07-20 19:03:43.152944] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.892 [2024-07-20 19:03:43.153185] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.892 [2024-07-20 19:03:43.153428] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.892 [2024-07-20 19:03:43.153453] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.892 [2024-07-20 19:03:43.153470] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.892 [2024-07-20 19:03:43.157055] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.892 [2024-07-20 19:03:43.166337] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.892 [2024-07-20 19:03:43.166862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.892 [2024-07-20 19:03:43.166895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.892 [2024-07-20 19:03:43.166914] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.892 [2024-07-20 19:03:43.167153] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.892 [2024-07-20 19:03:43.167395] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.892 [2024-07-20 19:03:43.167420] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.892 [2024-07-20 19:03:43.167437] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.892 [2024-07-20 19:03:43.171030] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.892 [2024-07-20 19:03:43.179960] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.892 [2024-07-20 19:03:43.180436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.892 [2024-07-20 19:03:43.180465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.892 [2024-07-20 19:03:43.180481] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.892 [2024-07-20 19:03:43.180696] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.892 [2024-07-20 19:03:43.180934] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.892 [2024-07-20 19:03:43.180958] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.892 [2024-07-20 19:03:43.180974] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.892 [2024-07-20 19:03:43.184132] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.892 [2024-07-20 19:03:43.193191] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.893 [2024-07-20 19:03:43.193668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.893 [2024-07-20 19:03:43.193696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.893 [2024-07-20 19:03:43.193713] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.893 [2024-07-20 19:03:43.193954] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.893 [2024-07-20 19:03:43.194168] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.893 [2024-07-20 19:03:43.194194] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.893 [2024-07-20 19:03:43.194208] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.893 [2024-07-20 19:03:43.197165] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.893 [2024-07-20 19:03:43.206429] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.893 [2024-07-20 19:03:43.206931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.893 [2024-07-20 19:03:43.206961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:32.893 [2024-07-20 19:03:43.206993] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:32.893 [2024-07-20 19:03:43.207220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:32.893 [2024-07-20 19:03:43.207414] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.893 [2024-07-20 19:03:43.207435] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.893 [2024-07-20 19:03:43.207449] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.893 [2024-07-20 19:03:43.210462] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.150 [2024-07-20 19:03:43.220056] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.150 [2024-07-20 19:03:43.220618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.150 [2024-07-20 19:03:43.220649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.150 [2024-07-20 19:03:43.220666] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.150 [2024-07-20 19:03:43.220939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.150 [2024-07-20 19:03:43.221154] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.150 [2024-07-20 19:03:43.221176] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.150 [2024-07-20 19:03:43.221190] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.150 [2024-07-20 19:03:43.224444] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.150 [2024-07-20 19:03:43.233653] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.150 [2024-07-20 19:03:43.234165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.150 [2024-07-20 19:03:43.234195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.150 [2024-07-20 19:03:43.234211] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.150 [2024-07-20 19:03:43.234423] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.150 [2024-07-20 19:03:43.234632] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.150 [2024-07-20 19:03:43.234653] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.150 [2024-07-20 19:03:43.234666] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.150 [2024-07-20 19:03:43.237660] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.150 [2024-07-20 19:03:43.246955] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.150 [2024-07-20 19:03:43.247659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.150 [2024-07-20 19:03:43.247697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.150 [2024-07-20 19:03:43.247714] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.150 [2024-07-20 19:03:43.247959] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.150 [2024-07-20 19:03:43.248175] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.150 [2024-07-20 19:03:43.248197] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.150 [2024-07-20 19:03:43.248211] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.150 [2024-07-20 19:03:43.251173] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.150 [2024-07-20 19:03:43.260243] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.150 [2024-07-20 19:03:43.260938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.150 [2024-07-20 19:03:43.260978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.150 [2024-07-20 19:03:43.260995] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.150 [2024-07-20 19:03:43.261226] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.150 [2024-07-20 19:03:43.261421] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.150 [2024-07-20 19:03:43.261443] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.150 [2024-07-20 19:03:43.261456] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.150 [2024-07-20 19:03:43.264421] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.150 [2024-07-20 19:03:43.273548] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.150 [2024-07-20 19:03:43.274183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.151 [2024-07-20 19:03:43.274221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.151 [2024-07-20 19:03:43.274239] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.151 [2024-07-20 19:03:43.274454] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.151 [2024-07-20 19:03:43.274648] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.151 [2024-07-20 19:03:43.274668] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.151 [2024-07-20 19:03:43.274682] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.151 [2024-07-20 19:03:43.277646] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.151 [2024-07-20 19:03:43.286898] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.151 [2024-07-20 19:03:43.287674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.151 [2024-07-20 19:03:43.287711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.151 [2024-07-20 19:03:43.287728] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.151 [2024-07-20 19:03:43.287962] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.151 [2024-07-20 19:03:43.288176] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.151 [2024-07-20 19:03:43.288198] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.151 [2024-07-20 19:03:43.288211] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.151 [2024-07-20 19:03:43.291177] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.151 [2024-07-20 19:03:43.300241] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.151 [2024-07-20 19:03:43.300754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.151 [2024-07-20 19:03:43.300784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.151 [2024-07-20 19:03:43.300810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.151 [2024-07-20 19:03:43.301048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.151 [2024-07-20 19:03:43.301259] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.151 [2024-07-20 19:03:43.301280] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.151 [2024-07-20 19:03:43.301293] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.151 [2024-07-20 19:03:43.304275] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.151 [2024-07-20 19:03:43.313584] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.151 [2024-07-20 19:03:43.314121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.151 [2024-07-20 19:03:43.314152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.151 [2024-07-20 19:03:43.314169] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.151 [2024-07-20 19:03:43.314417] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.151 [2024-07-20 19:03:43.314612] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.151 [2024-07-20 19:03:43.314633] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.151 [2024-07-20 19:03:43.314646] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.151 [2024-07-20 19:03:43.317633] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.151 [2024-07-20 19:03:43.326887] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.151 [2024-07-20 19:03:43.327657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.151 [2024-07-20 19:03:43.327695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.151 [2024-07-20 19:03:43.327712] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.151 [2024-07-20 19:03:43.327954] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.151 [2024-07-20 19:03:43.328168] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.151 [2024-07-20 19:03:43.328190] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.151 [2024-07-20 19:03:43.328212] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.151 [2024-07-20 19:03:43.331175] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.151 [2024-07-20 19:03:43.340200] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.151 [2024-07-20 19:03:43.340645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.151 [2024-07-20 19:03:43.340674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.151 [2024-07-20 19:03:43.340690] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.151 [2024-07-20 19:03:43.340937] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.151 [2024-07-20 19:03:43.341151] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.151 [2024-07-20 19:03:43.341173] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.151 [2024-07-20 19:03:43.341187] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.151 [2024-07-20 19:03:43.344144] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.151 [2024-07-20 19:03:43.353370] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.151 [2024-07-20 19:03:43.353850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.151 [2024-07-20 19:03:43.353880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.151 [2024-07-20 19:03:43.353897] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.151 [2024-07-20 19:03:43.354150] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.151 [2024-07-20 19:03:43.354360] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.151 [2024-07-20 19:03:43.354381] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.151 [2024-07-20 19:03:43.354395] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.151 [2024-07-20 19:03:43.357358] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.151 [2024-07-20 19:03:43.366586] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.151 [2024-07-20 19:03:43.367091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.151 [2024-07-20 19:03:43.367120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.151 [2024-07-20 19:03:43.367137] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.151 [2024-07-20 19:03:43.367378] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.151 [2024-07-20 19:03:43.367572] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.151 [2024-07-20 19:03:43.367593] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.151 [2024-07-20 19:03:43.367607] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.151 [2024-07-20 19:03:43.370567] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.151 [2024-07-20 19:03:43.379827] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.151 [2024-07-20 19:03:43.380443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.151 [2024-07-20 19:03:43.380499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.151 [2024-07-20 19:03:43.380517] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.151 [2024-07-20 19:03:43.380748] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.151 [2024-07-20 19:03:43.380981] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.151 [2024-07-20 19:03:43.381005] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.151 [2024-07-20 19:03:43.381020] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.151 [2024-07-20 19:03:43.384133] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.151 [2024-07-20 19:03:43.393140] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.151 [2024-07-20 19:03:43.393585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.151 [2024-07-20 19:03:43.393614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.151 [2024-07-20 19:03:43.393630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.151 [2024-07-20 19:03:43.393908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.151 [2024-07-20 19:03:43.394109] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.151 [2024-07-20 19:03:43.394131] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.151 [2024-07-20 19:03:43.394145] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.151 [2024-07-20 19:03:43.397121] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.151 [2024-07-20 19:03:43.406375] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.151 [2024-07-20 19:03:43.406815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.151 [2024-07-20 19:03:43.406844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.151 [2024-07-20 19:03:43.406862] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.151 [2024-07-20 19:03:43.407086] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.151 [2024-07-20 19:03:43.407299] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.151 [2024-07-20 19:03:43.407321] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.152 [2024-07-20 19:03:43.407334] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.152 [2024-07-20 19:03:43.410339] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.152 [2024-07-20 19:03:43.419573] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.152 [2024-07-20 19:03:43.420072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.152 [2024-07-20 19:03:43.420115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.152 [2024-07-20 19:03:43.420131] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.152 [2024-07-20 19:03:43.420358] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.152 [2024-07-20 19:03:43.420559] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.152 [2024-07-20 19:03:43.420581] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.152 [2024-07-20 19:03:43.420594] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.152 [2024-07-20 19:03:43.423557] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.152 [2024-07-20 19:03:43.432848] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.152 [2024-07-20 19:03:43.433379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.152 [2024-07-20 19:03:43.433408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.152 [2024-07-20 19:03:43.433424] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.152 [2024-07-20 19:03:43.433666] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.152 [2024-07-20 19:03:43.433889] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.152 [2024-07-20 19:03:43.433912] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.152 [2024-07-20 19:03:43.433925] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.152 [2024-07-20 19:03:43.436961] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1539216 Killed "${NVMF_APP[@]}" "$@" 00:33:33.152 19:03:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:33:33.152 19:03:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:33.152 19:03:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:33.152 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:33.152 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:33.152 [2024-07-20 19:03:43.446219] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.152 [2024-07-20 19:03:43.446656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.152 [2024-07-20 19:03:43.446683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.152 [2024-07-20 19:03:43.446699] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.152 [2024-07-20 19:03:43.446960] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.152 [2024-07-20 19:03:43.447180] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.152 [2024-07-20 19:03:43.447202] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.152 [2024-07-20 19:03:43.447215] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:33.152 19:03:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1540169 00:33:33.152 19:03:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:33.152 19:03:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1540169 00:33:33.152 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 1540169 ']' 00:33:33.152 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:33.152 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:33.152 [2024-07-20 19:03:43.450234] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.152 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:33.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:33.152 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:33.152 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:33.152 [2024-07-20 19:03:43.459495] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.152 [2024-07-20 19:03:43.459948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.152 [2024-07-20 19:03:43.459977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.152 [2024-07-20 19:03:43.459994] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.152 [2024-07-20 19:03:43.460230] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.152 [2024-07-20 19:03:43.460425] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.152 [2024-07-20 19:03:43.460445] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.152 [2024-07-20 19:03:43.460459] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.152 [2024-07-20 19:03:43.463503] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
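The tgt_init/nvmfappstart step traced above restarts the NVMe-oF target inside the cvl_0_0_ns_spdk namespace and then waits for its RPC socket. Reduced to plain shell it looks roughly like the sketch below; the binary path, namespace, and app arguments are copied from the trace, while the polling loop is only a simplified stand-in for the waitforlisten helper:

    NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    ip netns exec cvl_0_0_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Wait for the app to come up and listen on the default RPC socket.
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done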
00:33:33.410 [2024-07-20 19:03:43.473333] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.410 [2024-07-20 19:03:43.473815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.410 [2024-07-20 19:03:43.473847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.410 [2024-07-20 19:03:43.473865] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.410 [2024-07-20 19:03:43.474098] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.410 [2024-07-20 19:03:43.474334] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.410 [2024-07-20 19:03:43.474370] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.410 [2024-07-20 19:03:43.474385] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.410 [2024-07-20 19:03:43.477881] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.410 [2024-07-20 19:03:43.486727] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.410 [2024-07-20 19:03:43.487242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.410 [2024-07-20 19:03:43.487272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.411 [2024-07-20 19:03:43.487292] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.411 [2024-07-20 19:03:43.487522] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.411 [2024-07-20 19:03:43.487716] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.411 [2024-07-20 19:03:43.487737] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.411 [2024-07-20 19:03:43.487750] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.411 [2024-07-20 19:03:43.490902] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.411 [2024-07-20 19:03:43.498499] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:33:33.411 [2024-07-20 19:03:43.498581] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:33.411 [2024-07-20 19:03:43.500187] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.411 [2024-07-20 19:03:43.500712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.411 [2024-07-20 19:03:43.500742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.411 [2024-07-20 19:03:43.500768] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.411 [2024-07-20 19:03:43.500992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.411 [2024-07-20 19:03:43.501233] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.411 [2024-07-20 19:03:43.501254] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.411 [2024-07-20 19:03:43.501269] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.411 [2024-07-20 19:03:43.504469] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.411 [2024-07-20 19:03:43.513765] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.411 [2024-07-20 19:03:43.514271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.411 [2024-07-20 19:03:43.514302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.411 [2024-07-20 19:03:43.514320] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.411 [2024-07-20 19:03:43.514571] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.411 [2024-07-20 19:03:43.514771] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.411 [2024-07-20 19:03:43.514828] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.411 [2024-07-20 19:03:43.514842] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.411 [2024-07-20 19:03:43.518068] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.411 [2024-07-20 19:03:43.527199] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.411 [2024-07-20 19:03:43.527723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.411 [2024-07-20 19:03:43.527765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.411 [2024-07-20 19:03:43.527789] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.411 [2024-07-20 19:03:43.528034] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.411 [2024-07-20 19:03:43.528286] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.411 [2024-07-20 19:03:43.528306] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.411 [2024-07-20 19:03:43.528320] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.411 [2024-07-20 19:03:43.531521] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.411 EAL: No free 2048 kB hugepages reported on node 1 00:33:33.411 [2024-07-20 19:03:43.541015] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.411 [2024-07-20 19:03:43.541535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.411 [2024-07-20 19:03:43.541564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.411 [2024-07-20 19:03:43.541581] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.411 [2024-07-20 19:03:43.541810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.411 [2024-07-20 19:03:43.542031] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.411 [2024-07-20 19:03:43.542054] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.411 [2024-07-20 19:03:43.542069] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.411 [2024-07-20 19:03:43.545409] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
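The EAL notice above about no free 2048 kB hugepages on node 1 is worth cross-checking whenever target start-up misbehaves; the standard Linux sysfs/procfs views are enough for that (nothing SPDK-specific is assumed here):

    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
    grep -i huge /proc/meminfo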
00:33:33.411 [2024-07-20 19:03:43.554540] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.411 [2024-07-20 19:03:43.555020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.411 [2024-07-20 19:03:43.555048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.411 [2024-07-20 19:03:43.555065] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.411 [2024-07-20 19:03:43.555304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.411 [2024-07-20 19:03:43.555511] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.411 [2024-07-20 19:03:43.555531] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.411 [2024-07-20 19:03:43.555545] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.411 [2024-07-20 19:03:43.558668] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.411 [2024-07-20 19:03:43.567992] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.411 [2024-07-20 19:03:43.568546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.411 [2024-07-20 19:03:43.568598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.411 [2024-07-20 19:03:43.568615] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.411 [2024-07-20 19:03:43.568868] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.411 [2024-07-20 19:03:43.569101] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.411 [2024-07-20 19:03:43.569122] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.411 [2024-07-20 19:03:43.569135] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.411 [2024-07-20 19:03:43.570600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:33.411 [2024-07-20 19:03:43.572352] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.411 [2024-07-20 19:03:43.581526] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.411 [2024-07-20 19:03:43.582306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.411 [2024-07-20 19:03:43.582374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.411 [2024-07-20 19:03:43.582415] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.411 [2024-07-20 19:03:43.582654] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.411 [2024-07-20 19:03:43.582875] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.411 [2024-07-20 19:03:43.582897] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.411 [2024-07-20 19:03:43.582913] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.411 [2024-07-20 19:03:43.586008] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.411 [2024-07-20 19:03:43.595115] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.411 [2024-07-20 19:03:43.595880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.411 [2024-07-20 19:03:43.595915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.411 [2024-07-20 19:03:43.595949] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.411 [2024-07-20 19:03:43.596177] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.411 [2024-07-20 19:03:43.596392] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.411 [2024-07-20 19:03:43.596413] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.411 [2024-07-20 19:03:43.596428] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.411 [2024-07-20 19:03:43.599513] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.411 [2024-07-20 19:03:43.608517] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.411 [2024-07-20 19:03:43.609040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.411 [2024-07-20 19:03:43.609070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.411 [2024-07-20 19:03:43.609087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.411 [2024-07-20 19:03:43.609333] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.411 [2024-07-20 19:03:43.609539] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.411 [2024-07-20 19:03:43.609559] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.411 [2024-07-20 19:03:43.609573] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.411 [2024-07-20 19:03:43.612609] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.411 [2024-07-20 19:03:43.621875] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.411 [2024-07-20 19:03:43.622570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.411 [2024-07-20 19:03:43.622637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.411 [2024-07-20 19:03:43.622658] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.411 [2024-07-20 19:03:43.622929] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.412 [2024-07-20 19:03:43.623169] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.412 [2024-07-20 19:03:43.623202] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.412 [2024-07-20 19:03:43.623220] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.412 [2024-07-20 19:03:43.626312] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.412 [2024-07-20 19:03:43.635349] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.412 [2024-07-20 19:03:43.636057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.412 [2024-07-20 19:03:43.636100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.412 [2024-07-20 19:03:43.636118] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.412 [2024-07-20 19:03:43.636347] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.412 [2024-07-20 19:03:43.636555] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.412 [2024-07-20 19:03:43.636576] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.412 [2024-07-20 19:03:43.636591] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.412 [2024-07-20 19:03:43.639668] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.412 [2024-07-20 19:03:43.648676] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.412 [2024-07-20 19:03:43.649297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.412 [2024-07-20 19:03:43.649352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.412 [2024-07-20 19:03:43.649370] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.412 [2024-07-20 19:03:43.649615] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.412 [2024-07-20 19:03:43.649853] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.412 [2024-07-20 19:03:43.649875] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.412 [2024-07-20 19:03:43.649890] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.412 [2024-07-20 19:03:43.652967] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.412 [2024-07-20 19:03:43.658347] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:33.412 [2024-07-20 19:03:43.658380] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:33.412 [2024-07-20 19:03:43.658408] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:33.412 [2024-07-20 19:03:43.658421] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:33.412 [2024-07-20 19:03:43.658431] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
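The app_setup_trace notices above give two ways to look at the nvmf tracepoints enabled by -e 0xFFFF. Both commands below are taken directly from those notices, with /tmp chosen here only as an arbitrary place to keep the copy:

    # Live snapshot while the target is still running:
    spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file for offline analysis/debug:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0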
00:33:33.412 [2024-07-20 19:03:43.658612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:33.412 [2024-07-20 19:03:43.658674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:33.412 [2024-07-20 19:03:43.658677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.412 [2024-07-20 19:03:43.662271] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.412 [2024-07-20 19:03:43.662849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.412 [2024-07-20 19:03:43.662884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.412 [2024-07-20 19:03:43.662911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.412 [2024-07-20 19:03:43.663134] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.412 [2024-07-20 19:03:43.663357] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.412 [2024-07-20 19:03:43.663379] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.412 [2024-07-20 19:03:43.663395] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.412 [2024-07-20 19:03:43.666686] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.412 [2024-07-20 19:03:43.675953] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.412 [2024-07-20 19:03:43.676637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.412 [2024-07-20 19:03:43.676680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.412 [2024-07-20 19:03:43.676701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.412 [2024-07-20 19:03:43.676942] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.412 [2024-07-20 19:03:43.677166] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.412 [2024-07-20 19:03:43.677189] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.412 [2024-07-20 19:03:43.677205] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.412 [2024-07-20 19:03:43.680479] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.412 [2024-07-20 19:03:43.689536] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.412 [2024-07-20 19:03:43.690288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.412 [2024-07-20 19:03:43.690341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.412 [2024-07-20 19:03:43.690363] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.412 [2024-07-20 19:03:43.690596] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.412 [2024-07-20 19:03:43.690832] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.412 [2024-07-20 19:03:43.690855] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.412 [2024-07-20 19:03:43.690872] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.412 [2024-07-20 19:03:43.694092] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.412 [2024-07-20 19:03:43.703193] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.412 [2024-07-20 19:03:43.703938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.412 [2024-07-20 19:03:43.703984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.412 [2024-07-20 19:03:43.704004] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.412 [2024-07-20 19:03:43.704230] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.412 [2024-07-20 19:03:43.704453] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.412 [2024-07-20 19:03:43.704491] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.412 [2024-07-20 19:03:43.704509] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.412 [2024-07-20 19:03:43.707747] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.412 [2024-07-20 19:03:43.716766] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.412 [2024-07-20 19:03:43.717455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.412 [2024-07-20 19:03:43.717510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.412 [2024-07-20 19:03:43.717531] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.412 [2024-07-20 19:03:43.717764] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.412 [2024-07-20 19:03:43.717999] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.412 [2024-07-20 19:03:43.718022] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.412 [2024-07-20 19:03:43.718039] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.412 [2024-07-20 19:03:43.721307] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.412 [2024-07-20 19:03:43.730514] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.412 [2024-07-20 19:03:43.731347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.412 [2024-07-20 19:03:43.731402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.412 [2024-07-20 19:03:43.731435] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.412 [2024-07-20 19:03:43.731705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.412 [2024-07-20 19:03:43.731943] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.412 [2024-07-20 19:03:43.731968] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.412 [2024-07-20 19:03:43.731986] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.671 [2024-07-20 19:03:43.735283] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.671 [2024-07-20 19:03:43.744135] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.671 [2024-07-20 19:03:43.744669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.671 [2024-07-20 19:03:43.744704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.671 [2024-07-20 19:03:43.744723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.671 [2024-07-20 19:03:43.744953] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.671 [2024-07-20 19:03:43.745176] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.671 [2024-07-20 19:03:43.745198] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.671 [2024-07-20 19:03:43.745214] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.671 [2024-07-20 19:03:43.748478] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.671 [2024-07-20 19:03:43.757681] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.671 [2024-07-20 19:03:43.758191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.671 [2024-07-20 19:03:43.758220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.671 [2024-07-20 19:03:43.758237] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.671 [2024-07-20 19:03:43.758452] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.671 [2024-07-20 19:03:43.758672] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.671 [2024-07-20 19:03:43.758693] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.671 [2024-07-20 19:03:43.758708] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.671 [2024-07-20 19:03:43.761936] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.671 [2024-07-20 19:03:43.771237] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.671 [2024-07-20 19:03:43.771722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.671 [2024-07-20 19:03:43.771749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.671 [2024-07-20 19:03:43.771766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.671 [2024-07-20 19:03:43.771990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.671 [2024-07-20 19:03:43.772209] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.671 [2024-07-20 19:03:43.772231] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.671 [2024-07-20 19:03:43.772245] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.671 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:33.671 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:33:33.671 19:03:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:33.671 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:33.671 [2024-07-20 19:03:43.775477] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.671 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:33.671 [2024-07-20 19:03:43.784920] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.671 [2024-07-20 19:03:43.785393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.671 [2024-07-20 19:03:43.785422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.671 [2024-07-20 19:03:43.785438] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.671 [2024-07-20 19:03:43.785653] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.671 [2024-07-20 19:03:43.785881] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.671 [2024-07-20 19:03:43.785903] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.671 [2024-07-20 19:03:43.785917] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.671 [2024-07-20 19:03:43.789189] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.671 [2024-07-20 19:03:43.798440] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.671 [2024-07-20 19:03:43.798895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.672 [2024-07-20 19:03:43.798924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.672 [2024-07-20 19:03:43.798941] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.672 [2024-07-20 19:03:43.799156] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.672 19:03:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:33.672 [2024-07-20 19:03:43.799375] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.672 [2024-07-20 19:03:43.799399] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.672 [2024-07-20 19:03:43.799413] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.672 19:03:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:33.672 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.672 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:33.672 [2024-07-20 19:03:43.802649] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.672 [2024-07-20 19:03:43.804251] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:33.672 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.672 19:03:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:33.672 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.672 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:33.672 [2024-07-20 19:03:43.812109] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.672 [2024-07-20 19:03:43.812560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.672 [2024-07-20 19:03:43.812588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.672 [2024-07-20 19:03:43.812604] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.672 [2024-07-20 19:03:43.812828] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.672 [2024-07-20 19:03:43.813057] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.672 [2024-07-20 19:03:43.813078] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.672 [2024-07-20 19:03:43.813093] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:33.672 [2024-07-20 19:03:43.816356] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.672 [2024-07-20 19:03:43.825740] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.672 [2024-07-20 19:03:43.826236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.672 [2024-07-20 19:03:43.826265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.672 [2024-07-20 19:03:43.826281] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.672 [2024-07-20 19:03:43.826497] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.672 [2024-07-20 19:03:43.826716] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.672 [2024-07-20 19:03:43.826746] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.672 [2024-07-20 19:03:43.826761] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.672 [2024-07-20 19:03:43.830033] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.672 [2024-07-20 19:03:43.839283] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.672 [2024-07-20 19:03:43.840006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.672 [2024-07-20 19:03:43.840050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.672 [2024-07-20 19:03:43.840071] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.672 [2024-07-20 19:03:43.840296] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.672 [2024-07-20 19:03:43.840520] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.672 [2024-07-20 19:03:43.840542] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.672 [2024-07-20 19:03:43.840559] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.672 [2024-07-20 19:03:43.843809] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.672 Malloc0 00:33:33.672 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.672 19:03:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:33.672 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.672 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:33.672 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.672 19:03:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:33.672 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.672 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:33.672 [2024-07-20 19:03:43.852978] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.672 [2024-07-20 19:03:43.853440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.672 [2024-07-20 19:03:43.853471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252f1e0 with addr=10.0.0.2, port=4420 00:33:33.672 [2024-07-20 19:03:43.853488] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f1e0 is same with the state(5) to be set 00:33:33.672 [2024-07-20 19:03:43.853706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252f1e0 (9): Bad file descriptor 00:33:33.672 [2024-07-20 19:03:43.853939] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.672 [2024-07-20 19:03:43.853962] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.672 [2024-07-20 19:03:43.853977] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.672 [2024-07-20 19:03:43.857246] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.672 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.672 19:03:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:33.672 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.672 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:33.672 [2024-07-20 19:03:43.864034] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:33.672 [2024-07-20 19:03:43.866470] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.672 19:03:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.672 19:03:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1539503 00:33:33.672 [2024-07-20 19:03:43.941385] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
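Stripped of the xtrace noise, the rpc_cmd calls in this stretch configure the target that bdevperf then reconnects to. The equivalent scripts/rpc.py invocations, run from the SPDK repository root with the arguments copied from the trace, would be:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420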
00:33:43.687 00:33:43.687 Latency(us) 00:33:43.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:43.687 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:43.687 Verification LBA range: start 0x0 length 0x4000 00:33:43.687 Nvme1n1 : 15.01 7050.45 27.54 8688.54 0.00 8107.25 1098.33 17670.45 00:33:43.687 =================================================================================================================== 00:33:43.687 Total : 7050.45 27.54 8688.54 0.00 8107.25 1098.33 17670.45 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:43.687 rmmod nvme_tcp 00:33:43.687 rmmod nvme_fabrics 00:33:43.687 rmmod nvme_keyring 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1540169 ']' 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1540169 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 1540169 ']' 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 1540169 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1540169 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1540169' 00:33:43.687 killing process with pid 1540169 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 1540169 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 1540169 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
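The nvmftestfini teardown running here unloads the kernel initiator modules and stops the target. Condensed, it amounts to roughly the following; the module names come from the trace above, and the kill/wait pair is a simplification of the killprocess helper:

    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"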
00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:43.687 19:03:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.586 19:03:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:45.586 00:33:45.586 real 0m22.295s 00:33:45.586 user 1m0.089s 00:33:45.586 sys 0m4.054s 00:33:45.586 19:03:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:45.586 19:03:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:45.586 ************************************ 00:33:45.586 END TEST nvmf_bdevperf 00:33:45.586 ************************************ 00:33:45.586 19:03:55 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:45.586 19:03:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:45.586 19:03:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:45.586 19:03:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:45.586 ************************************ 00:33:45.586 START TEST nvmf_target_disconnect 00:33:45.586 ************************************ 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:45.586 * Looking for test storage... 
00:33:45.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:33:45.586 19:03:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
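The device-ID tables built above are what gather_supported_nvmf_pci_devs uses on the following lines to decide which NICs can carry the test traffic. A minimal sketch of that lookup, with the PCI addresses and the 0x8086/0x159b (ice, E810) IDs taken from this run; it is only an illustration of the sysfs walk, not a copy of nvmf/common.sh:

# Sketch: map an NVMe-oF capable NIC's PCI function to its kernel net device.
# PCI addresses and IDs are the ones reported later in this log.
for pci in 0000:0a:00.0 0000:0a:00.1; do
    read -r vendor < "/sys/bus/pci/devices/$pci/vendor"
    read -r device < "/sys/bus/pci/devices/$pci/device"
    echo "Found $pci ($vendor - $device)"
    for netdev in "/sys/bus/pci/devices/$pci/net/"*; do
        # e.g. "Found net devices under 0000:0a:00.0: cvl_0_0"
        [ -e "$netdev" ] && echo "Found net devices under $pci: ${netdev##*/}"
    done
done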
00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:47.487 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:47.487 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.487 19:03:57 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:47.487 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:47.487 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:47.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:47.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:33:47.487 00:33:47.487 --- 10.0.0.2 ping statistics --- 00:33:47.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.487 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:47.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:47.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:33:47.487 00:33:47.487 --- 10.0.0.1 ping statistics --- 00:33:47.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.487 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:47.487 ************************************ 00:33:47.487 START TEST nvmf_target_disconnect_tc1 00:33:47.487 ************************************ 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:33:47.487 
19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:47.487 EAL: No free 2048 kB hugepages reported on node 1 00:33:47.487 [2024-07-20 19:03:57.791909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-20 19:03:57.791981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2048740 with addr=10.0.0.2, port=4420 00:33:47.487 [2024-07-20 19:03:57.792015] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:47.487 [2024-07-20 19:03:57.792042] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:47.487 [2024-07-20 19:03:57.792058] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:33:47.487 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:47.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:47.487 Initializing NVMe Controllers 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:47.487 00:33:47.487 real 0m0.097s 00:33:47.487 user 0m0.040s 00:33:47.487 sys 
0m0.056s 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:47.487 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:47.487 ************************************ 00:33:47.487 END TEST nvmf_target_disconnect_tc1 00:33:47.487 ************************************ 00:33:47.745 19:03:57 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:47.745 19:03:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:47.745 19:03:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:47.745 19:03:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:47.745 ************************************ 00:33:47.745 START TEST nvmf_target_disconnect_tc2 00:33:47.745 ************************************ 00:33:47.745 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:33:47.745 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:33:47.745 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:47.745 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:47.745 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:47.745 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:47.745 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1543316 00:33:47.745 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:47.745 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1543316 00:33:47.745 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1543316 ']' 00:33:47.745 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:47.745 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:47.745 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:47.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:47.745 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:47.745 19:03:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:47.745 [2024-07-20 19:03:57.903782] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:33:47.745 [2024-07-20 19:03:57.903886] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:47.745 EAL: No free 2048 kB hugepages reported on node 1 00:33:47.745 [2024-07-20 19:03:57.969789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:47.745 [2024-07-20 19:03:58.060648] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:47.745 [2024-07-20 19:03:58.060706] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:47.745 [2024-07-20 19:03:58.060719] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:47.745 [2024-07-20 19:03:58.060730] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:47.745 [2024-07-20 19:03:58.060739] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:47.745 [2024-07-20 19:03:58.060823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:33:47.745 [2024-07-20 19:03:58.060889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:33:47.745 [2024-07-20 19:03:58.060954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:33:47.745 [2024-07-20 19:03:58.060957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:48.002 Malloc0 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:48.002 [2024-07-20 19:03:58.245529] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:48.002 [2024-07-20 19:03:58.273826] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1543338 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:33:48.002 19:03:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:48.259 EAL: No free 2048 kB hugepages reported on node 1 00:33:50.168 19:04:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1543316 00:33:50.168 19:04:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 
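The target that the reconnect example talks to in tc2 is assembled by the rpc_cmd calls traced above (malloc bdev, TCP transport, subsystem, namespace, listener). Written out as direct rpc.py invocations they would look roughly like the sketch below; the rpc.py path, the default /var/tmp/spdk.sock socket and the bare '-t tcp' transport arguments are assumptions for readability, while the NQN, serial number, bdev and listener arguments are the ones from the log:

# Sketch of the tc2 target setup as plain rpc.py calls.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_transport -t tcp
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# target_disconnect.sh then runs the reconnect example against
# 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' and kill -9's the
# target process, which produces the I/O failures and qpair errors below.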
00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Write completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Write completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Write completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Write completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Write completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Write completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Write completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Write completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Write completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Write completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Write completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 [2024-07-20 19:04:00.300439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Write completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Write completed with error (sct=0, sc=8) 00:33:50.168 
starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Write completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Write completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.168 Read completed with error (sct=0, sc=8) 00:33:50.168 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Write completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Write completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Write completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Write completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Write completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Write completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 [2024-07-20 19:04:00.300824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Write completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Write completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Write completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Write completed with error (sct=0, sc=8) 00:33:50.169 starting I/O 
failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Write completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Write completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Write completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Write completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Read completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Write completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Write completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Write completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Write completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 Write completed with error (sct=0, sc=8) 00:33:50.169 starting I/O failed 00:33:50.169 [2024-07-20 19:04:00.301166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.169 [2024-07-20 19:04:00.301529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.301560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 00:33:50.169 [2024-07-20 19:04:00.301842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.301870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 00:33:50.169 [2024-07-20 19:04:00.302095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.302122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 00:33:50.169 [2024-07-20 19:04:00.302354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.302381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 00:33:50.169 [2024-07-20 19:04:00.302618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.302645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 
00:33:50.169 [2024-07-20 19:04:00.302889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.302916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 00:33:50.169 [2024-07-20 19:04:00.303149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.303176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 00:33:50.169 [2024-07-20 19:04:00.303460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.303491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 00:33:50.169 [2024-07-20 19:04:00.303834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.303864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 00:33:50.169 [2024-07-20 19:04:00.304078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.304105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 00:33:50.169 [2024-07-20 19:04:00.304502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.304533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 00:33:50.169 [2024-07-20 19:04:00.304826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.304876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 00:33:50.169 [2024-07-20 19:04:00.305091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.305117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 00:33:50.169 [2024-07-20 19:04:00.305392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.305418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 00:33:50.169 [2024-07-20 19:04:00.305760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.305787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 
00:33:50.169 [2024-07-20 19:04:00.306022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.306060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 00:33:50.169 [2024-07-20 19:04:00.306563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.306615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 00:33:50.169 [2024-07-20 19:04:00.306899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.306928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 00:33:50.169 [2024-07-20 19:04:00.307162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.307189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 00:33:50.169 [2024-07-20 19:04:00.307466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.307492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 00:33:50.169 [2024-07-20 19:04:00.307826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.307869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 00:33:50.169 [2024-07-20 19:04:00.308080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.308122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 00:33:50.169 [2024-07-20 19:04:00.308417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.308447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 00:33:50.169 [2024-07-20 19:04:00.308898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.169 [2024-07-20 19:04:00.308925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.169 qpair failed and we were unable to recover it. 00:33:50.169 [2024-07-20 19:04:00.309138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.170 [2024-07-20 19:04:00.309165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.170 qpair failed and we were unable to recover it. 
00:33:50.170 [2024-07-20 19:04:00.309424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.170 [2024-07-20 19:04:00.309451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.170 qpair failed and we were unable to recover it. 00:33:50.170 [2024-07-20 19:04:00.309743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.170 [2024-07-20 19:04:00.309770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.170 qpair failed and we were unable to recover it. 00:33:50.170 [2024-07-20 19:04:00.310012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.170 [2024-07-20 19:04:00.310039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.170 qpair failed and we were unable to recover it. 00:33:50.170 [2024-07-20 19:04:00.310413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.170 [2024-07-20 19:04:00.310461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.170 qpair failed and we were unable to recover it. 00:33:50.170 [2024-07-20 19:04:00.310901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.170 [2024-07-20 19:04:00.310928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.170 qpair failed and we were unable to recover it. 00:33:50.170 [2024-07-20 19:04:00.311165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.170 [2024-07-20 19:04:00.311193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.170 qpair failed and we were unable to recover it. 00:33:50.170 [2024-07-20 19:04:00.311681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.170 [2024-07-20 19:04:00.311728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.170 qpair failed and we were unable to recover it. 00:33:50.170 [2024-07-20 19:04:00.312000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.170 [2024-07-20 19:04:00.312027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.170 qpair failed and we were unable to recover it. 00:33:50.170 [2024-07-20 19:04:00.312392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.170 [2024-07-20 19:04:00.312421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.170 qpair failed and we were unable to recover it. 00:33:50.170 [2024-07-20 19:04:00.312663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.170 [2024-07-20 19:04:00.312693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.170 qpair failed and we were unable to recover it. 
00:33:50.170 [2024-07-20 19:04:00.312958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.170 [2024-07-20 19:04:00.312985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420
00:33:50.170 qpair failed and we were unable to recover it.
00:33:50.170 [2024-07-20 19:04:00.313340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.170 [2024-07-20 19:04:00.313366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420
00:33:50.170 qpair failed and we were unable to recover it.
[... from 19:04:00.313723 through 19:04:00.383465 (log prefix 00:33:50.170 to 00:33:50.183) the same three-line error sequence repeats for every further connection attempt: posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. ...]
00:33:50.183 [2024-07-20 19:04:00.383738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.183 [2024-07-20 19:04:00.383765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.183 qpair failed and we were unable to recover it. 00:33:50.183 [2024-07-20 19:04:00.384031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.183 [2024-07-20 19:04:00.384060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.183 qpair failed and we were unable to recover it. 00:33:50.183 [2024-07-20 19:04:00.384400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.183 [2024-07-20 19:04:00.384441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.183 qpair failed and we were unable to recover it. 00:33:50.184 [2024-07-20 19:04:00.384753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.184 [2024-07-20 19:04:00.384802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.184 qpair failed and we were unable to recover it. 00:33:50.184 [2024-07-20 19:04:00.385051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.184 [2024-07-20 19:04:00.385094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.184 qpair failed and we were unable to recover it. 00:33:50.184 [2024-07-20 19:04:00.385373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.184 [2024-07-20 19:04:00.385415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.184 qpair failed and we were unable to recover it. 00:33:50.184 [2024-07-20 19:04:00.385726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.184 [2024-07-20 19:04:00.385753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.184 qpair failed and we were unable to recover it. 00:33:50.184 [2024-07-20 19:04:00.386207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.184 [2024-07-20 19:04:00.386263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.184 qpair failed and we were unable to recover it. 00:33:50.184 [2024-07-20 19:04:00.386552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.184 [2024-07-20 19:04:00.386586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.184 qpair failed and we were unable to recover it. 00:33:50.184 [2024-07-20 19:04:00.386878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.184 [2024-07-20 19:04:00.386907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.184 qpair failed and we were unable to recover it. 
00:33:50.184 [2024-07-20 19:04:00.387176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.184 [2024-07-20 19:04:00.387203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.184 qpair failed and we were unable to recover it. 00:33:50.184 [2024-07-20 19:04:00.387560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.184 [2024-07-20 19:04:00.387586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.184 qpair failed and we were unable to recover it. 00:33:50.184 [2024-07-20 19:04:00.387864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.185 [2024-07-20 19:04:00.387896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.185 qpair failed and we were unable to recover it. 00:33:50.185 [2024-07-20 19:04:00.388296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.185 [2024-07-20 19:04:00.388335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.185 qpair failed and we were unable to recover it. 00:33:50.185 [2024-07-20 19:04:00.388652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.185 [2024-07-20 19:04:00.388695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.185 qpair failed and we were unable to recover it. 00:33:50.185 [2024-07-20 19:04:00.388972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.185 [2024-07-20 19:04:00.389000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.185 qpair failed and we were unable to recover it. 00:33:50.185 [2024-07-20 19:04:00.389283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.185 [2024-07-20 19:04:00.389312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.185 qpair failed and we were unable to recover it. 00:33:50.185 [2024-07-20 19:04:00.389612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.185 [2024-07-20 19:04:00.389642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.185 qpair failed and we were unable to recover it. 00:33:50.185 [2024-07-20 19:04:00.389939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.185 [2024-07-20 19:04:00.389967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.185 qpair failed and we were unable to recover it. 00:33:50.185 [2024-07-20 19:04:00.390267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.185 [2024-07-20 19:04:00.390295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.185 qpair failed and we were unable to recover it. 
00:33:50.186 [2024-07-20 19:04:00.390617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.186 [2024-07-20 19:04:00.390648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.186 qpair failed and we were unable to recover it. 00:33:50.186 [2024-07-20 19:04:00.390914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.186 [2024-07-20 19:04:00.390945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.186 qpair failed and we were unable to recover it. 00:33:50.186 [2024-07-20 19:04:00.391207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.186 [2024-07-20 19:04:00.391234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.186 qpair failed and we were unable to recover it. 00:33:50.186 [2024-07-20 19:04:00.391494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.186 [2024-07-20 19:04:00.391526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.186 qpair failed and we were unable to recover it. 00:33:50.186 [2024-07-20 19:04:00.391819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.186 [2024-07-20 19:04:00.391850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.186 qpair failed and we were unable to recover it. 00:33:50.186 [2024-07-20 19:04:00.392159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.186 [2024-07-20 19:04:00.392201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.186 qpair failed and we were unable to recover it. 00:33:50.186 [2024-07-20 19:04:00.392479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.186 [2024-07-20 19:04:00.392509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.187 qpair failed and we were unable to recover it. 00:33:50.187 [2024-07-20 19:04:00.392772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.187 [2024-07-20 19:04:00.392815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.187 qpair failed and we were unable to recover it. 00:33:50.187 [2024-07-20 19:04:00.393102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.187 [2024-07-20 19:04:00.393130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.187 qpair failed and we were unable to recover it. 00:33:50.187 [2024-07-20 19:04:00.393395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.187 [2024-07-20 19:04:00.393425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.187 qpair failed and we were unable to recover it. 
00:33:50.187 [2024-07-20 19:04:00.393730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.187 [2024-07-20 19:04:00.393770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.187 qpair failed and we were unable to recover it. 00:33:50.187 [2024-07-20 19:04:00.394078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.187 [2024-07-20 19:04:00.394106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.187 qpair failed and we were unable to recover it. 00:33:50.187 [2024-07-20 19:04:00.394444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.187 [2024-07-20 19:04:00.394474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.187 qpair failed and we were unable to recover it. 00:33:50.187 [2024-07-20 19:04:00.394742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.188 [2024-07-20 19:04:00.394774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.188 qpair failed and we were unable to recover it. 00:33:50.188 [2024-07-20 19:04:00.395046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.188 [2024-07-20 19:04:00.395073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.188 qpair failed and we were unable to recover it. 00:33:50.188 [2024-07-20 19:04:00.395374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.188 [2024-07-20 19:04:00.395403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.188 qpair failed and we were unable to recover it. 00:33:50.188 [2024-07-20 19:04:00.395751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.188 [2024-07-20 19:04:00.395777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.188 qpair failed and we were unable to recover it. 00:33:50.188 [2024-07-20 19:04:00.396080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.188 [2024-07-20 19:04:00.396107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.188 qpair failed and we were unable to recover it. 00:33:50.188 [2024-07-20 19:04:00.396389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.188 [2024-07-20 19:04:00.396419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.188 qpair failed and we were unable to recover it. 00:33:50.188 [2024-07-20 19:04:00.396700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.188 [2024-07-20 19:04:00.396726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.188 qpair failed and we were unable to recover it. 
00:33:50.188 [2024-07-20 19:04:00.396996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-20 19:04:00.397024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-20 19:04:00.397349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-20 19:04:00.397379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-20 19:04:00.397715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-20 19:04:00.397755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-20 19:04:00.398194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-20 19:04:00.398255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-20 19:04:00.398569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-20 19:04:00.398612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-20 19:04:00.398915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-20 19:04:00.398946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-20 19:04:00.399407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-20 19:04:00.399470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-20 19:04:00.399766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-20 19:04:00.399805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-20 19:04:00.400079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-20 19:04:00.400107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-20 19:04:00.400460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-20 19:04:00.400500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 
00:33:50.190 [2024-07-20 19:04:00.400759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-20 19:04:00.400785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-20 19:04:00.401246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-20 19:04:00.401308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-20 19:04:00.401631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-20 19:04:00.401677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-20 19:04:00.401977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-20 19:04:00.402007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-20 19:04:00.402292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-20 19:04:00.402322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-20 19:04:00.402600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-20 19:04:00.402627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-20 19:04:00.402949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-20 19:04:00.402991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-20 19:04:00.403201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-20 19:04:00.403226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-20 19:04:00.403486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-20 19:04:00.403511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-20 19:04:00.403867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-20 19:04:00.403894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 
00:33:50.191 [2024-07-20 19:04:00.404184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-20 19:04:00.404217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-20 19:04:00.404481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-20 19:04:00.404507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-20 19:04:00.404859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-20 19:04:00.404890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-20 19:04:00.405171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-20 19:04:00.405197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-20 19:04:00.405512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-20 19:04:00.405554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.192 [2024-07-20 19:04:00.405830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-20 19:04:00.405862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 [2024-07-20 19:04:00.406127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-20 19:04:00.406157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-20 19:04:00.406491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-20 19:04:00.406531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-20 19:04:00.406753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-20 19:04:00.406779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-20 19:04:00.407071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-20 19:04:00.407102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 
00:33:50.194 [2024-07-20 19:04:00.407380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-20 19:04:00.407408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-20 19:04:00.407765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-20 19:04:00.407791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-20 19:04:00.408098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-20 19:04:00.408129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-20 19:04:00.408440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-20 19:04:00.408482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-20 19:04:00.408786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-20 19:04:00.408820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-20 19:04:00.409113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-20 19:04:00.409140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-20 19:04:00.409640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-20 19:04:00.409692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-20 19:04:00.409972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-20 19:04:00.410000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-20 19:04:00.410352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-20 19:04:00.410380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-20 19:04:00.410656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-20 19:04:00.410683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 
00:33:50.195 [2024-07-20 19:04:00.410959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-20 19:04:00.410989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-20 19:04:00.411334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-20 19:04:00.411375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-20 19:04:00.411743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-20 19:04:00.411769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-20 19:04:00.412022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-20 19:04:00.412050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-20 19:04:00.412335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-20 19:04:00.412377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-20 19:04:00.412761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-20 19:04:00.412791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-20 19:04:00.413061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-20 19:04:00.413089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-20 19:04:00.413369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-20 19:04:00.413399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-20 19:04:00.413759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-20 19:04:00.413785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-20 19:04:00.414091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-20 19:04:00.414118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 
00:33:50.196 [2024-07-20 19:04:00.414473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-20 19:04:00.414499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-20 19:04:00.414804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-20 19:04:00.414832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-20 19:04:00.415153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-20 19:04:00.415184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-20 19:04:00.415484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-20 19:04:00.415514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-20 19:04:00.415819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-20 19:04:00.415847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-20 19:04:00.416214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-20 19:04:00.416260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-20 19:04:00.416569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-20 19:04:00.416602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-20 19:04:00.416899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-20 19:04:00.416927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-20 19:04:00.417167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-20 19:04:00.417193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-20 19:04:00.417442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-20 19:04:00.417470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 
00:33:50.197 [2024-07-20 19:04:00.417732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-20 19:04:00.417760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-20 19:04:00.418027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-20 19:04:00.418063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-20 19:04:00.418335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-20 19:04:00.418365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-20 19:04:00.418616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-20 19:04:00.418642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-20 19:04:00.418975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-20 19:04:00.419018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-20 19:04:00.419388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-20 19:04:00.419429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-20 19:04:00.419815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-20 19:04:00.419846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-20 19:04:00.420138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-20 19:04:00.420168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-20 19:04:00.420460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-20 19:04:00.420487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-20 19:04:00.420738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-20 19:04:00.420764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 
00:33:50.198 [2024-07-20 19:04:00.421015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-20 19:04:00.421045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-20 19:04:00.421392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-20 19:04:00.421417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-20 19:04:00.421657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-20 19:04:00.421683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-20 19:04:00.421915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-20 19:04:00.421943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-20 19:04:00.422218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-20 19:04:00.422259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-20 19:04:00.422596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-20 19:04:00.422637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-20 19:04:00.422883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-20 19:04:00.422913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-20 19:04:00.423272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-20 19:04:00.423299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-20 19:04:00.423626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-20 19:04:00.423667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-20 19:04:00.423970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-20 19:04:00.424002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 
00:33:50.198 [2024-07-20 19:04:00.424298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-20 19:04:00.424328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-20 19:04:00.424623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-20 19:04:00.424648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-20 19:04:00.424937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-20 19:04:00.424965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-20 19:04:00.425211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-20 19:04:00.425241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-20 19:04:00.425526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-20 19:04:00.425553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-20 19:04:00.425859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-20 19:04:00.425889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-20 19:04:00.426181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-20 19:04:00.426211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-20 19:04:00.426510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-20 19:04:00.426554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-20 19:04:00.426873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-20 19:04:00.426905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-20 19:04:00.427199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-20 19:04:00.427226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 
00:33:50.199 [2024-07-20 19:04:00.427581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-20 19:04:00.427607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-20 19:04:00.427966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-20 19:04:00.427993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-20 19:04:00.428278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-20 19:04:00.428305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-20 19:04:00.428625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-20 19:04:00.428674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-20 19:04:00.428970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-20 19:04:00.428997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-20 19:04:00.429335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-20 19:04:00.429361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-20 19:04:00.429605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-20 19:04:00.429632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-20 19:04:00.429886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-20 19:04:00.429914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-20 19:04:00.430207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-20 19:04:00.430236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-20 19:04:00.430494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-20 19:04:00.430520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 
00:33:50.200 [2024-07-20 19:04:00.430785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-20 19:04:00.430821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-20 19:04:00.431114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-20 19:04:00.431144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-20 19:04:00.431461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-20 19:04:00.431509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-20 19:04:00.431811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-20 19:04:00.431842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-20 19:04:00.432133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-20 19:04:00.432163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-20 19:04:00.432451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-20 19:04:00.432477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-20 19:04:00.432741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-20 19:04:00.432767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-20 19:04:00.433061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-20 19:04:00.433091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-20 19:04:00.433353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-20 19:04:00.433380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-20 19:04:00.433680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-20 19:04:00.433710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 
00:33:50.200 [2024-07-20 19:04:00.433971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-20 19:04:00.434001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-20 19:04:00.434280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-20 19:04:00.434306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-20 19:04:00.434576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-20 19:04:00.434602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-20 19:04:00.434875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-20 19:04:00.434906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-20 19:04:00.435336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-20 19:04:00.435382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-20 19:04:00.435657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-20 19:04:00.435686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-20 19:04:00.435990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-20 19:04:00.436019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-20 19:04:00.436252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-20 19:04:00.436280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-20 19:04:00.436529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-20 19:04:00.436560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-20 19:04:00.436854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-20 19:04:00.436883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 
00:33:50.201 [2024-07-20 19:04:00.437184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-20 19:04:00.437211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-20 19:04:00.437531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-20 19:04:00.437557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-20 19:04:00.437809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-20 19:04:00.437841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-20 19:04:00.438089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-20 19:04:00.438116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-20 19:04:00.438445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-20 19:04:00.438487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-20 19:04:00.438783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-20 19:04:00.438818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-20 19:04:00.439086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-20 19:04:00.439113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-20 19:04:00.439403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-20 19:04:00.439432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-20 19:04:00.439657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-20 19:04:00.439684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.202 qpair failed and we were unable to recover it. 00:33:50.202 [2024-07-20 19:04:00.439941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.202 [2024-07-20 19:04:00.439973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.202 qpair failed and we were unable to recover it. 
00:33:50.202 [2024-07-20 19:04:00.440200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.202 [2024-07-20 19:04:00.440246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.202 qpair failed and we were unable to recover it. 00:33:50.202 [2024-07-20 19:04:00.440611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.202 [2024-07-20 19:04:00.440654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.202 qpair failed and we were unable to recover it. 00:33:50.202 [2024-07-20 19:04:00.440927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.202 [2024-07-20 19:04:00.440955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.202 qpair failed and we were unable to recover it. 00:33:50.202 [2024-07-20 19:04:00.441236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.202 [2024-07-20 19:04:00.441266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.202 qpair failed and we were unable to recover it. 00:33:50.202 [2024-07-20 19:04:00.441536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.202 [2024-07-20 19:04:00.441566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.202 qpair failed and we were unable to recover it. 00:33:50.202 [2024-07-20 19:04:00.441887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.202 [2024-07-20 19:04:00.441914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.202 qpair failed and we were unable to recover it. 00:33:50.202 [2024-07-20 19:04:00.442226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.202 [2024-07-20 19:04:00.442256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.202 qpair failed and we were unable to recover it. 00:33:50.202 [2024-07-20 19:04:00.442524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.202 [2024-07-20 19:04:00.442554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.202 qpair failed and we were unable to recover it. 00:33:50.202 [2024-07-20 19:04:00.442882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.202 [2024-07-20 19:04:00.442908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.202 qpair failed and we were unable to recover it. 00:33:50.202 [2024-07-20 19:04:00.443189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.202 [2024-07-20 19:04:00.443216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.202 qpair failed and we were unable to recover it. 
00:33:50.202 [2024-07-20 19:04:00.443498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.203 [2024-07-20 19:04:00.443528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.203 qpair failed and we were unable to recover it. 00:33:50.203 [2024-07-20 19:04:00.443774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.203 [2024-07-20 19:04:00.443821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.203 qpair failed and we were unable to recover it. 00:33:50.203 [2024-07-20 19:04:00.444103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.203 [2024-07-20 19:04:00.444135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.203 qpair failed and we were unable to recover it. 00:33:50.203 [2024-07-20 19:04:00.444435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.203 [2024-07-20 19:04:00.444463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.203 qpair failed and we were unable to recover it. 00:33:50.203 [2024-07-20 19:04:00.444737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.203 [2024-07-20 19:04:00.444764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.203 qpair failed and we were unable to recover it. 00:33:50.203 [2024-07-20 19:04:00.445156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.203 [2024-07-20 19:04:00.445203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.203 qpair failed and we were unable to recover it. 00:33:50.203 [2024-07-20 19:04:00.445494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.203 [2024-07-20 19:04:00.445526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.203 qpair failed and we were unable to recover it. 00:33:50.203 [2024-07-20 19:04:00.445820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.203 [2024-07-20 19:04:00.445867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.203 qpair failed and we were unable to recover it. 00:33:50.203 [2024-07-20 19:04:00.446135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.203 [2024-07-20 19:04:00.446165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.203 qpair failed and we were unable to recover it. 00:33:50.203 [2024-07-20 19:04:00.446420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.203 [2024-07-20 19:04:00.446446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.203 qpair failed and we were unable to recover it. 
00:33:50.203 [2024-07-20 19:04:00.446710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.204 [2024-07-20 19:04:00.446736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.204 qpair failed and we were unable to recover it. 00:33:50.204 [2024-07-20 19:04:00.447020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.204 [2024-07-20 19:04:00.447062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.204 qpair failed and we were unable to recover it. 00:33:50.204 [2024-07-20 19:04:00.447361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.204 [2024-07-20 19:04:00.447391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.204 qpair failed and we were unable to recover it. 00:33:50.204 [2024-07-20 19:04:00.447680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.204 [2024-07-20 19:04:00.447708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.204 qpair failed and we were unable to recover it. 00:33:50.204 [2024-07-20 19:04:00.448047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.204 [2024-07-20 19:04:00.448090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.204 qpair failed and we were unable to recover it. 00:33:50.204 [2024-07-20 19:04:00.448336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.204 [2024-07-20 19:04:00.448365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.204 qpair failed and we were unable to recover it. 00:33:50.204 [2024-07-20 19:04:00.448678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.204 [2024-07-20 19:04:00.448721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.204 qpair failed and we were unable to recover it. 00:33:50.204 [2024-07-20 19:04:00.448968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.204 [2024-07-20 19:04:00.448999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.204 qpair failed and we were unable to recover it. 00:33:50.204 [2024-07-20 19:04:00.449260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.204 [2024-07-20 19:04:00.449290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.204 qpair failed and we were unable to recover it. 00:33:50.204 [2024-07-20 19:04:00.449644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.205 [2024-07-20 19:04:00.449671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.205 qpair failed and we were unable to recover it. 
00:33:50.205 [2024-07-20 19:04:00.449962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.205 [2024-07-20 19:04:00.449993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.205 qpair failed and we were unable to recover it. 00:33:50.205 [2024-07-20 19:04:00.450284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.205 [2024-07-20 19:04:00.450314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.205 qpair failed and we were unable to recover it. 00:33:50.205 [2024-07-20 19:04:00.450595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.205 [2024-07-20 19:04:00.450622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.205 qpair failed and we were unable to recover it. 00:33:50.205 [2024-07-20 19:04:00.450868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.205 [2024-07-20 19:04:00.450895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.205 qpair failed and we were unable to recover it. 00:33:50.205 [2024-07-20 19:04:00.451137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.205 [2024-07-20 19:04:00.451164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.205 qpair failed and we were unable to recover it. 00:33:50.205 [2024-07-20 19:04:00.451466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.205 [2024-07-20 19:04:00.451494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.205 qpair failed and we were unable to recover it. 00:33:50.205 [2024-07-20 19:04:00.451825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.205 [2024-07-20 19:04:00.451852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.205 qpair failed and we were unable to recover it. 00:33:50.205 [2024-07-20 19:04:00.452178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.205 [2024-07-20 19:04:00.452220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.205 qpair failed and we were unable to recover it. 00:33:50.205 [2024-07-20 19:04:00.452483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.205 [2024-07-20 19:04:00.452511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.205 qpair failed and we were unable to recover it. 00:33:50.206 [2024-07-20 19:04:00.452829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.206 [2024-07-20 19:04:00.452871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.206 qpair failed and we were unable to recover it. 
00:33:50.206 [2024-07-20 19:04:00.453120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.206 [2024-07-20 19:04:00.453147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.206 qpair failed and we were unable to recover it. 00:33:50.206 [2024-07-20 19:04:00.453437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.206 [2024-07-20 19:04:00.453480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.206 qpair failed and we were unable to recover it. 00:33:50.206 [2024-07-20 19:04:00.453790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.206 [2024-07-20 19:04:00.453841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.206 qpair failed and we were unable to recover it. 00:33:50.206 [2024-07-20 19:04:00.454141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.206 [2024-07-20 19:04:00.454183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.206 qpair failed and we were unable to recover it. 00:33:50.206 [2024-07-20 19:04:00.454531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.206 [2024-07-20 19:04:00.454576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.206 qpair failed and we were unable to recover it. 00:33:50.206 [2024-07-20 19:04:00.454863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.206 [2024-07-20 19:04:00.454894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.206 qpair failed and we were unable to recover it. 00:33:50.206 [2024-07-20 19:04:00.455160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.206 [2024-07-20 19:04:00.455190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.206 qpair failed and we were unable to recover it. 00:33:50.206 [2024-07-20 19:04:00.455544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.206 [2024-07-20 19:04:00.455570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.206 qpair failed and we were unable to recover it. 00:33:50.206 [2024-07-20 19:04:00.455849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.206 [2024-07-20 19:04:00.455876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.206 qpair failed and we were unable to recover it. 00:33:50.206 [2024-07-20 19:04:00.456253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.207 [2024-07-20 19:04:00.456299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.207 qpair failed and we were unable to recover it. 
00:33:50.207 [2024-07-20 19:04:00.456624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.207 [2024-07-20 19:04:00.456671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.207 qpair failed and we were unable to recover it. 00:33:50.207 [2024-07-20 19:04:00.456949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.207 [2024-07-20 19:04:00.456978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.207 qpair failed and we were unable to recover it. 00:33:50.207 [2024-07-20 19:04:00.457282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.207 [2024-07-20 19:04:00.457312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.207 qpair failed and we were unable to recover it. 00:33:50.207 [2024-07-20 19:04:00.457600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.207 [2024-07-20 19:04:00.457627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.207 qpair failed and we were unable to recover it. 00:33:50.207 [2024-07-20 19:04:00.457956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.207 [2024-07-20 19:04:00.457983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.207 qpair failed and we were unable to recover it. 00:33:50.207 [2024-07-20 19:04:00.458247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.207 [2024-07-20 19:04:00.458276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.207 qpair failed and we were unable to recover it. 00:33:50.207 [2024-07-20 19:04:00.458559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.207 [2024-07-20 19:04:00.458586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.207 qpair failed and we were unable to recover it. 00:33:50.207 [2024-07-20 19:04:00.458919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.207 [2024-07-20 19:04:00.458947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.207 qpair failed and we were unable to recover it. 00:33:50.207 [2024-07-20 19:04:00.459187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.207 [2024-07-20 19:04:00.459215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-20 19:04:00.459509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-20 19:04:00.459550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 
00:33:50.208 [2024-07-20 19:04:00.459848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-20 19:04:00.459890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-20 19:04:00.460156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-20 19:04:00.460184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-20 19:04:00.460453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-20 19:04:00.460479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-20 19:04:00.460768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-20 19:04:00.460807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-20 19:04:00.461073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-20 19:04:00.461103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-20 19:04:00.461367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-20 19:04:00.461394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-20 19:04:00.461668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-20 19:04:00.461699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-20 19:04:00.461994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-20 19:04:00.462027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-20 19:04:00.462301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-20 19:04:00.462327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-20 19:04:00.462593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-20 19:04:00.462621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 
00:33:50.209 [2024-07-20 19:04:00.462863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-20 19:04:00.462891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-20 19:04:00.463195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-20 19:04:00.463237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-20 19:04:00.463541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-20 19:04:00.463572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-20 19:04:00.463850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-20 19:04:00.463893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-20 19:04:00.464173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-20 19:04:00.464199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-20 19:04:00.464475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-20 19:04:00.464505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-20 19:04:00.464791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-20 19:04:00.464830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-20 19:04:00.465120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-20 19:04:00.465147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-20 19:04:00.465353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-20 19:04:00.465395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-20 19:04:00.465663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-20 19:04:00.465693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 
00:33:50.211 [2024-07-20 19:04:00.466025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-20 19:04:00.466070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-20 19:04:00.466306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-20 19:04:00.466336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-20 19:04:00.466627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-20 19:04:00.466657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-20 19:04:00.466953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-20 19:04:00.466980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-20 19:04:00.467286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-20 19:04:00.467315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-20 19:04:00.467583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-20 19:04:00.467613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-20 19:04:00.467873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-20 19:04:00.467899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-20 19:04:00.468195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-20 19:04:00.468225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-20 19:04:00.468514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-20 19:04:00.468555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-20 19:04:00.468943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-20 19:04:00.468973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 
00:33:50.212 [2024-07-20 19:04:00.469207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-20 19:04:00.469237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-20 19:04:00.469498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-20 19:04:00.469528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-20 19:04:00.469860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-20 19:04:00.469887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-20 19:04:00.470146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-20 19:04:00.470176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-20 19:04:00.470438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-20 19:04:00.470467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-20 19:04:00.470959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-20 19:04:00.470984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-20 19:04:00.471264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-20 19:04:00.471293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-20 19:04:00.471582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-20 19:04:00.471612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-20 19:04:00.471959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-20 19:04:00.471986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-20 19:04:00.472262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-20 19:04:00.472292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 
00:33:50.213 [2024-07-20 19:04:00.472553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-20 19:04:00.472582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-20 19:04:00.472831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-20 19:04:00.472858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-20 19:04:00.473125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-20 19:04:00.473155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-20 19:04:00.473390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-20 19:04:00.473417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-20 19:04:00.473742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-20 19:04:00.473781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-20 19:04:00.474070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-20 19:04:00.474100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-20 19:04:00.474363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-20 19:04:00.474393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-20 19:04:00.474711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-20 19:04:00.474737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-20 19:04:00.475153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-20 19:04:00.475226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-20 19:04:00.475507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-20 19:04:00.475536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 
00:33:50.214 [2024-07-20 19:04:00.475843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-20 19:04:00.475888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-20 19:04:00.476155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-20 19:04:00.476184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-20 19:04:00.476557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-20 19:04:00.476587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-20 19:04:00.476863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-20 19:04:00.476889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-20 19:04:00.477146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-20 19:04:00.477171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-20 19:04:00.477603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-20 19:04:00.477654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-20 19:04:00.477945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-20 19:04:00.477972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-20 19:04:00.478235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-20 19:04:00.478261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-20 19:04:00.478535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-20 19:04:00.478565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-20 19:04:00.478859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-20 19:04:00.478901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 
00:33:50.214 [2024-07-20 19:04:00.479209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-20 19:04:00.479247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-20 19:04:00.479582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-20 19:04:00.479612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-20 19:04:00.479887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-20 19:04:00.479916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-20 19:04:00.480168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-20 19:04:00.480199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-20 19:04:00.480464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-20 19:04:00.480494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-20 19:04:00.480804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-20 19:04:00.480832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-20 19:04:00.481113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-20 19:04:00.481144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-20 19:04:00.481426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-20 19:04:00.481469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-20 19:04:00.481852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-20 19:04:00.481882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-20 19:04:00.482193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-20 19:04:00.482224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 
00:33:50.215 [2024-07-20 19:04:00.482486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-20 19:04:00.482516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-20 19:04:00.482804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-20 19:04:00.482846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-20 19:04:00.483191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-20 19:04:00.483233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.490 [2024-07-20 19:04:00.483557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.490 [2024-07-20 19:04:00.483588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.490 qpair failed and we were unable to recover it. 00:33:50.490 [2024-07-20 19:04:00.483880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.490 [2024-07-20 19:04:00.483926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.490 qpair failed and we were unable to recover it. 00:33:50.490 [2024-07-20 19:04:00.484185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.484220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.484552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.484581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.484857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.484886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.485151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.485183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.485547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.485578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 
00:33:50.491 [2024-07-20 19:04:00.485853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.485880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.486150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.486180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.486449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.486481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.486751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.486779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.487083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.487113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.487376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.487408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.487715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.487758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.488060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.488089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.488492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.488522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.488756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.488806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 
00:33:50.491 [2024-07-20 19:04:00.489080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.489110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.489459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.489501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.489759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.489786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.490052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.490079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.490359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.490390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.490695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.490722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.490985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.491016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.491306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.491333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.491575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.491601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.491878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.491906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 
00:33:50.491 [2024-07-20 19:04:00.492209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.492250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.492510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.492536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.492787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.492820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.493074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.493100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.493410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.493436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.493705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.493733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.494044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.494086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.494375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.494400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.494731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.494757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.495201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.495247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 
00:33:50.491 [2024-07-20 19:04:00.495542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.495570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.495917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.495949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.496238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.496268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.496696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.496749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.497027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.497064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.491 [2024-07-20 19:04:00.497369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.491 [2024-07-20 19:04:00.497399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.491 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.497718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.497768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.498101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.498132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.498422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.498452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.498691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.498718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 
00:33:50.492 [2024-07-20 19:04:00.498982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.499009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.499288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.499318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.499606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.499633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.500023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.500050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.500357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.500387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.500752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.500845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.501101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.501126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.501507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.501567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.501859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.501885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.502172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.502202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 
00:33:50.492 [2024-07-20 19:04:00.502471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.502501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.502759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.502785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.503059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.503089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.503359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.503385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.503704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.503745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.504011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.504039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.504421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.504446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.504736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.504763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.505064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.505106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.505432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.505509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 
00:33:50.492 [2024-07-20 19:04:00.505866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.505908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.506206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.506236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.506530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.506560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.506818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.506849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.507230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.507269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.507542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.507570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.507843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.507872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.508125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.508155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.508420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.508449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.508879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.508910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 
00:33:50.492 [2024-07-20 19:04:00.509185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.509215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.509514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.509556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.509827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.509855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.510167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.510197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.510489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.510519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.510786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.510835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.492 qpair failed and we were unable to recover it. 00:33:50.492 [2024-07-20 19:04:00.511097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.492 [2024-07-20 19:04:00.511127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.511425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.511455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.511819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.511845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.512138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.512165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 
00:33:50.493 [2024-07-20 19:04:00.512545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.512607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.512887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.512914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.513174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.513203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.513473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.513503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.513821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.513864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.514136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.514166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.514413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.514439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.514803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.514829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.515147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.515177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.515469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.515499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 
00:33:50.493 [2024-07-20 19:04:00.515784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.515837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.516096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.516137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.516387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.516413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.516668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.516696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.516971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.517002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.517273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.517302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.517576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.517602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.517896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.517924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.518265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.518292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.518582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.518607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 
00:33:50.493 [2024-07-20 19:04:00.518944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.518974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.519272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.519302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.519593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.519619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.519914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.519941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.520245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.520280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.520647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.520707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.520997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.521024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.521322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.521352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.521660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.521702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.521991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.522019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 
00:33:50.493 [2024-07-20 19:04:00.522351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.522413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.522699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.522725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.523021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.523051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.523317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.523347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.523613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.523640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.523923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.523953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.524232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.493 [2024-07-20 19:04:00.524257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.493 qpair failed and we were unable to recover it. 00:33:50.493 [2024-07-20 19:04:00.524516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.524541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.524816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.524847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.525097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.525128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 
00:33:50.494 [2024-07-20 19:04:00.525390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.525414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.525759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.525789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.526069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.526099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.526492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.526544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.526844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.526883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.527117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.527149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.527406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.527432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.527700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.527731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.527999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.528029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.528274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.528301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 
00:33:50.494 [2024-07-20 19:04:00.528601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.528631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.528925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.528957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.529298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.529325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.529622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.529649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.530018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.530047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.530370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.530413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.530689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.530719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.531011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.531039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.531322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.531348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.531613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.531640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 
00:33:50.494 [2024-07-20 19:04:00.532017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.532048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.532333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.532360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.532668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.532698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.532990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.533021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.533327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.533354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.533663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.533693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.533952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.533978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.534338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.534364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.534659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.534686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.534963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.534990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 
00:33:50.494 [2024-07-20 19:04:00.535259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.535285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.535573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.535603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.535897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.535928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.536375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.536440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.536922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.536956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.537197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.537229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.537497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.537524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.537876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.494 [2024-07-20 19:04:00.537907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.494 qpair failed and we were unable to recover it. 00:33:50.494 [2024-07-20 19:04:00.538172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.538202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.538578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.538633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 
00:33:50.495 [2024-07-20 19:04:00.538901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.538932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.539217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.539247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.539517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.539543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.539812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.539843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.540105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.540134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.540425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.540457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.540755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.540786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.541090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.541120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.541408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.541434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.541782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.541820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 
00:33:50.495 [2024-07-20 19:04:00.542115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.542143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.542353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.542384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.542703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.542742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.543018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.543046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.543309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.543336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.543590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.543620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.543903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.543930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.544170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.544196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.544452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.544479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.544737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.544767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 
00:33:50.495 [2024-07-20 19:04:00.545040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.545083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.545383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.545413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.545692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.545721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.545973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.546001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.546290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.546315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.546542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.546568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.546837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.546864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.547119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.547146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.547409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.547436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.547682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.547708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 
00:33:50.495 [2024-07-20 19:04:00.548013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.548044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.548335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.495 [2024-07-20 19:04:00.548377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.495 qpair failed and we were unable to recover it. 00:33:50.495 [2024-07-20 19:04:00.548649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.548675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.548943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.548974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.549275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.549305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.549560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.549587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.549863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.549894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.550170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.550200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.550590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.550646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.550982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.551016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 
00:33:50.496 [2024-07-20 19:04:00.551305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.551346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.551608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.551635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.551933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.551964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.552217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.552245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.552519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.552547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.552801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.552829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.553071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.553098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.553358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.553385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.553602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.553629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.553870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.553898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 
00:33:50.496 [2024-07-20 19:04:00.554130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.554158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.554392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.554418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.554668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.554695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.554945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.554973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.555224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.555250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.555469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.555496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.555708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.555735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.555974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.556002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.556250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.556277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.556482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.556509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 
00:33:50.496 [2024-07-20 19:04:00.556746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.556773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.557021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.557048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.557253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.557280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.557567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.557594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.557862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.557890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.558105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.558132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.558351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.558378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.558632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.558662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.558941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.558969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.559180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.559207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 
00:33:50.496 [2024-07-20 19:04:00.559446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.559473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.559740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.559768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.496 [2024-07-20 19:04:00.560034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.496 [2024-07-20 19:04:00.560062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.496 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.560277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.560306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.560523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.560551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.560809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.560837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.561059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.561087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.561325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.561353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.561568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.561595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.561842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.561870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 
00:33:50.497 [2024-07-20 19:04:00.562091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.562122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.562342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.562370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.562593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.562620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.562869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.562897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.563170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.563197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.563415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.563442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.563698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.563725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.564010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.564037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.564341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.564368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.564638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.564665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 
00:33:50.497 [2024-07-20 19:04:00.564977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.565004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.565450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.565476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.565720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.565747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.565998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.566025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.566314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.566341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.566613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.566640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.566876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.566904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.567189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.567216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.567518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.567545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.567784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.567818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 
00:33:50.497 [2024-07-20 19:04:00.568072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.568100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.568374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.568401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.568646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.568708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.568965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.568993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.569260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.569287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.569550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.569577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.569791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.569826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.570066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.570093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.570309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.570336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.570545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.570571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 
00:33:50.497 [2024-07-20 19:04:00.570873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.570901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.571135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.571162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.571425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.571452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.571702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-20 19:04:00.571729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-20 19:04:00.572006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.572036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.572306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.572332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.572549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.572576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.572789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.572823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.573103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.573130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.573373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.573400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 
00:33:50.498 [2024-07-20 19:04:00.573636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.573663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 Read completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Read completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Read completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Read completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Read completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Read completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Read completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Write completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Write completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Write completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Write completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Read completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Read completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Write completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Write completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Write completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Read completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Read completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Read completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Write completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Write completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Write completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Write completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Write completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Write completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Write completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Read completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Read completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Read completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Write completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Read completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 Write completed with error (sct=0, sc=8) 00:33:50.498 starting I/O failed 00:33:50.498 [2024-07-20 19:04:00.573989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:50.498 [2024-07-20 19:04:00.574114] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe66390 is same with the state(5) to be set 00:33:50.498 [2024-07-20 19:04:00.574493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 
111 00:33:50.498 [2024-07-20 19:04:00.574538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.574832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.574859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.575104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.575133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.575429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.575456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.575717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.575744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.575987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.576015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.576330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.576363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.576608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.576638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.576911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.576941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.577259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.577302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 
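Note on the burst of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" lines and the "CQ transport error -6 (No such device or address) on qpair id 3" message above: each failed completion carries an NVMe status split into a status code type (sct) and a status code (sc). sct=0 is the generic command status type, and within it sc=0x08 is "Command Aborted due to SQ Deletion" in the NVMe specification, which is what outstanding I/O sees when its qpair is torn down; -6 is -ENXIO, the transport-level error the log already spells out in parentheses. A simplified decoder sketch (illustration only, not SPDK's own status-to-string helper):

/* Illustration only: interpreting the (sct, sc) pair printed by the test.
 * sct selects the status code type, sc the status code within that type. */
#include <stdio.h>

static const char *decode_status(int sct, int sc)
{
    if (sct == 0) {                     /* generic command status type */
        switch (sc) {
        case 0x00: return "Successful Completion";
        case 0x04: return "Data Transfer Error";
        case 0x07: return "Command Abort Requested";
        case 0x08: return "Command Aborted due to SQ Deletion";
        default:   return "Other generic status";
        }
    }
    return "Non-generic status code type";
}

int main(void)
{
    /* The failed I/Os above all report sct=0, sc=8: commands aborted
     * because their submission queue went away with the failed qpair. */
    printf("sct=0, sc=8 -> %s\n", decode_status(0, 0x08));
    return 0;
}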
00:33:50.498 [2024-07-20 19:04:00.577595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.577625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.577876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.577907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.578176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.578211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.578541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.578586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.578852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.578881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.579182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.579227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.579541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.579568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.579849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.579880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.580168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.580215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.580529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.580573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 
00:33:50.498 [2024-07-20 19:04:00.580977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.581022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.581408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-20 19:04:00.581464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-20 19:04:00.581747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.581774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.582109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.582171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.582487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.582514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.582760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.582787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.583019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.583047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.583291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.583338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.583660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.583691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.583995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.584041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 
00:33:50.499 [2024-07-20 19:04:00.584382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.584411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.584700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.584727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.585006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.585052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.585337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.585386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.585676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.585703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.586012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.586058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.586455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.586507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.586756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.586783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.587006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.587033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.587295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.587325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 
00:33:50.499 [2024-07-20 19:04:00.587692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.587720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.588005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.588037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.588378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.588408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.588665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.588693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.588942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.588971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.589220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.589248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.589575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.589607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.589955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.589983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.590273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.590318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.590557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.590584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 
00:33:50.499 [2024-07-20 19:04:00.590878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.590924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.591230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.591275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.591563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.591590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.591830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.591858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.592098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.592142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.592448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.592497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.592740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.592771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-20 19:04:00.593060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-20 19:04:00.593105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.593379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.593413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.593687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.593715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 
00:33:50.500 [2024-07-20 19:04:00.593975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.594007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.594244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.594274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.594526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.594556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.594835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.594864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.595101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.595145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.595440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.595470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.595724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.595752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.595970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.595998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.596342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.596379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.596680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.596707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 
00:33:50.500 [2024-07-20 19:04:00.596917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.596946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.597241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.597271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.597567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.597594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.597854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.597885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.598174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.598201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.598524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.598551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.598813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.598858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.599170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.599201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.599482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.599529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.599772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.599806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 
00:33:50.500 [2024-07-20 19:04:00.600133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.600164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.600476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.600535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.600813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.600843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.601117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.601167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.601473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.601522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.601798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.601826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.602096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.602128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.602406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.602459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.602709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.602737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.603012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.603069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 
00:33:50.500 [2024-07-20 19:04:00.603342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.603387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.603637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.603664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.603967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.604012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.604329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.604373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.604618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.604644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.604942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.604986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.605297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.605341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.605612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-20 19:04:00.605639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-20 19:04:00.605902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.605947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.606226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.606272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 
00:33:50.501 [2024-07-20 19:04:00.606523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.606551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.606842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.606890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.607140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.607185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.607462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.607519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.607735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.607763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.608042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.608097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.608346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.608392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.608721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.608749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.609098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.609144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.609421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.609466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 
00:33:50.501 [2024-07-20 19:04:00.609785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.609824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.610071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.610116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.610395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.610440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.610690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.610717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.611012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.611058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.611338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.611383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.611623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.611650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.611912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.611957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.612234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.612283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.612536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.612563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 
00:33:50.501 [2024-07-20 19:04:00.612814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.612852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.613101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.613146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.613484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.613529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.613772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.613808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.614070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.614097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.614334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.614378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.614681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.614712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.614990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.615036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.615336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.615385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.615691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.615718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 
00:33:50.501 [2024-07-20 19:04:00.615961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.616006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.616308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.616353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.616679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.616710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.616957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.617001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.618563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.618596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.618866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.618911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.619129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.619157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.619367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-20 19:04:00.619395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-20 19:04:00.619633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.619662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.619918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.619963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 
00:33:50.502 [2024-07-20 19:04:00.620264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.620308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.620656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.620685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.620900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.620928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.621229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.621273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.621619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.621645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.621931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.621976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.622244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.622297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.622619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.622647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.622912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.622958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.623223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.623253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 
00:33:50.502 [2024-07-20 19:04:00.623537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.623564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.623768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.623811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.624071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.624117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.624413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.624458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.624717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.624744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.625124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.625169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.625473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.625499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.625767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.625814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.626132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.626175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.626457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.626503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 
00:33:50.502 [2024-07-20 19:04:00.626779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.626817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.627213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.627275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.627565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.627610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.627891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.627920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.628151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.628179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.628495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.628525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.628805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.628849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.629127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.629152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.629452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.629481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.629768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.629835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 
00:33:50.502 [2024-07-20 19:04:00.630129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.630154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.630448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.630492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.630801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.630857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.631136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.631161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.631411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.631455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.631765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.631818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.632128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.632154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.632448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.632492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.502 [2024-07-20 19:04:00.632790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.502 [2024-07-20 19:04:00.632856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.502 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.633160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.633186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 
00:33:50.503 [2024-07-20 19:04:00.633495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.633538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.633854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.633881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.634181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.634206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.634467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.634510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.634750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.634801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.635064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.635090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.635400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.635430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.635742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.635785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.636047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.636073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.636359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.636402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 
00:33:50.503 [2024-07-20 19:04:00.636685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.636729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.636983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.637010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.637351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.637380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.637677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.637724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.637988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.638015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.638270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.638314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.638615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.638659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.638952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.638978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.639256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.639300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.639596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.639639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 
00:33:50.503 [2024-07-20 19:04:00.639894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.639920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.640193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.640219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.640546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.640574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.640895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.640922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.641231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.641256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.641569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.641613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.641880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.641911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.642174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.642200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.642691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.642739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.643017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.643045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 
00:33:50.503 [2024-07-20 19:04:00.643336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.643379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.643745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.643788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.644119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.644159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.644466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.503 [2024-07-20 19:04:00.644513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.503 qpair failed and we were unable to recover it. 00:33:50.503 [2024-07-20 19:04:00.644822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.644859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.645179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.645223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.645528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.645557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.645845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.645872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.646255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.646311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.646590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.646637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 
00:33:50.504 [2024-07-20 19:04:00.646943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.646972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.647329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.647372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.647646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.647689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.647960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.647986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.648270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.648298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.648575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.648620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.648967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.649008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.649427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.649482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.649785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.649821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.650205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.650262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 
00:33:50.504 [2024-07-20 19:04:00.650590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.650633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.650880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.650909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.651189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.651233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.651516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.651561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.651828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.651855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.652349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.652375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.652689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.652732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.652981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.653008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.653254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.653297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.653550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.653577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 
00:33:50.504 [2024-07-20 19:04:00.653873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.653900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.654174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.654217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.654501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.654542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.654809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.654836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.655089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.655115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.655514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.655569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.655818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.655864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.656136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.656161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.656431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.656475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.656801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.656852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 
00:33:50.504 [2024-07-20 19:04:00.657099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.657132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.657409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.657451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.657864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.657890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.658112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.658138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-20 19:04:00.658384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-20 19:04:00.658427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.658806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.658855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.659264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.659320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.660308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.660339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.660868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.660899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.661177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.661203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 
00:33:50.505 [2024-07-20 19:04:00.661569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.661594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.661850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.661878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.662121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.662149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.662487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.662513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.662760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.662786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.663022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.663048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.663258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.663283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.663529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.663554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.663817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.663852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.664093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.664128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 
00:33:50.505 [2024-07-20 19:04:00.664344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.664370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.664597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.664621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.664856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.664883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.665121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.665146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.665439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.665464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.665702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.665727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.665972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.665999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.666231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.666257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.666524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.666549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.666763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.666790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 
00:33:50.505 [2024-07-20 19:04:00.667018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.667044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.667284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.667310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.667603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.667644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.667902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.667930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.668178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.668204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.668483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.668509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.668728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.668760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.669011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.669038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.669265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.669291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.669543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.669568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 
00:33:50.505 [2024-07-20 19:04:00.669835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.669866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.670079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.670105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.670374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.670399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.670660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.670686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.670914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.670941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.505 qpair failed and we were unable to recover it. 00:33:50.505 [2024-07-20 19:04:00.671179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.505 [2024-07-20 19:04:00.671206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.671474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.671500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.671811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.671853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.672098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.672124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.672387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.672411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 
00:33:50.506 [2024-07-20 19:04:00.672667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.672692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.672940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.672967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.673215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.673240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.673498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.673523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.673839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.673866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.674080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.674106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.674356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.674382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.674656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.674681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.674921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.674947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.675160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.675200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 
00:33:50.506 [2024-07-20 19:04:00.675470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.675495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.675733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.675772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.676039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.676065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.676318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.676343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.676588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.676613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.676869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.676897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.677134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.677160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.677430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.677455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.677699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.677725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.677997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.678024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 
00:33:50.506 [2024-07-20 19:04:00.678344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.678385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.678660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.678686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.678953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.678979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.679221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.679247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.679513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.679538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.679829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.679856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.680071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.680102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.680377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.680403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.680722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.680759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.680996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.681023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 
00:33:50.506 [2024-07-20 19:04:00.681265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.681291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.681555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.681581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.681828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.681858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.682098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.682124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.682355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.682380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.682620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.506 [2024-07-20 19:04:00.682645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.506 qpair failed and we were unable to recover it. 00:33:50.506 [2024-07-20 19:04:00.682882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.682908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.683169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.683195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.683433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.683459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.683703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.683728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 
00:33:50.507 [2024-07-20 19:04:00.683953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.683980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.684221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.684247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.684507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.684532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.684807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.684834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.685054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.685080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.685321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.685346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.685562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.685587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.685829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.685862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.686135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.686161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.686413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.686438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 
00:33:50.507 [2024-07-20 19:04:00.686659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.686684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.686907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.686936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.687209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.687235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.687527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.687568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.687814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.687843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.688084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.688120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.688395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.688420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.688635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.688660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.688875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.688902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.689155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.689180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 
00:33:50.507 [2024-07-20 19:04:00.689423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.689448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.689683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.689713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.689991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.690017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.690246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.690271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.690510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.690535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.690750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.690774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.691012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.691038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.691282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.691308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.691560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.691585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.691802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.691829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 
00:33:50.507 [2024-07-20 19:04:00.692044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.692073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.692355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.692380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.692616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.692641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.692856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.692882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.693101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.693127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.693398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.693423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.507 [2024-07-20 19:04:00.693628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.507 [2024-07-20 19:04:00.693653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.507 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.693867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.693894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.694124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.694149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.694373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.694398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 
00:33:50.508 [2024-07-20 19:04:00.694637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.694667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.694904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.694930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.695138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.695164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.695378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.695403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.695667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.695691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.695936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.695961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.696222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.696247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.696467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.696493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.696730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.696755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.697048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.697081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 
00:33:50.508 [2024-07-20 19:04:00.697323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.697349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.697623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.697649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.697916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.697941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.698164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.698191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.698433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.698458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.698676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.698701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.698941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.698967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.699204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.699229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.699465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.699491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.699749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.699774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 
00:33:50.508 [2024-07-20 19:04:00.700002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.700027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.700266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.700291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.700518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.700546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.700814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.700847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.701060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.701085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.701321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.701346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.701586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.701611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.701853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.701884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.702114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.702139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.702409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.702434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 
00:33:50.508 [2024-07-20 19:04:00.702678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.702703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.508 [2024-07-20 19:04:00.702945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.508 [2024-07-20 19:04:00.702972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.508 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.703213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.703239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.703470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.703495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.703736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.703761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.703980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.704006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.704256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.704281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.704501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.704526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.704800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.704826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.705074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.705107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 
00:33:50.509 [2024-07-20 19:04:00.705349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.705374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.705620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.705646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.705866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.705893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.706136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.706162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.706405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.706430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.706672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.706697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.706933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.706959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.707192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.707218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.707457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.707482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.707704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.707729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 
00:33:50.509 [2024-07-20 19:04:00.707963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.707988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.708241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.708267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.708536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.708561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.708799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.708825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.709034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.709073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.709343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.709372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.709651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.709676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.709941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.709967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.710204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.710229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.710442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.710467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 
00:33:50.509 [2024-07-20 19:04:00.710724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.710749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.710960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.710985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.711258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.711283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.711489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.711514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.711723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.711749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.711991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.712017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.712288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.712313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.712525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.712552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.712771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.712814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.713064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.713093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 
00:33:50.509 [2024-07-20 19:04:00.713302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.713327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.713544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.713570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.509 [2024-07-20 19:04:00.713817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.509 [2024-07-20 19:04:00.713855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.509 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.714076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.714101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.714371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.714396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.714633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.714658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.714906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.714932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.715170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.715195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.715411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.715447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.715691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.715716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 
00:33:50.510 [2024-07-20 19:04:00.716023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.716048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.716301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.716326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.716600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.716626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.716914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.716939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.717197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.717223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.717425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.717451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.717677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.717702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.717914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.717941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.718199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.718224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.718462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.718488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 
00:33:50.510 [2024-07-20 19:04:00.718762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.718788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.719035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.719070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.719286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.719311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.719558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.719584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.719848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.719873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.720076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.720101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.720348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.720373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.720617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.720642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.720880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.720907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.721141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.721168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 
00:33:50.510 [2024-07-20 19:04:00.721379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.721405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.721616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.721642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.721879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.721907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.722145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.722171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.722396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.722422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.722684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.722709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.722926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.722952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.723172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.723198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.723437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.723462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.723694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.723730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 
00:33:50.510 [2024-07-20 19:04:00.723969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.723995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.724202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.724227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.724473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.724498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.510 [2024-07-20 19:04:00.724708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.510 [2024-07-20 19:04:00.724733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.510 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.724973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.724998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.725241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.725267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.725488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.725514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.725724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.725749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.725980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.726006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.726280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.726306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 
00:33:50.511 [2024-07-20 19:04:00.726552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.726577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.726844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.726870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.727134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.727159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.727403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.727428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.727668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.727694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.727910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.727938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.728143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.728169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.728439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.728464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.728681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.728708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.728921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.728947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 
00:33:50.511 [2024-07-20 19:04:00.729184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.729209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.729432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.729458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.729658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.729684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.729901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.729928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.730143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.730169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.730389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.730415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.730676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.730706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.731007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.731034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.731300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.731326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.731571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.731596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 
00:33:50.511 [2024-07-20 19:04:00.731831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.731864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.732086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.732112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.732323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.732349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.732598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.732624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.732848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.732874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.733142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.733167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.733454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.733480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.733724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.733749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.734015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.734041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.734601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.734629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 
00:33:50.511 [2024-07-20 19:04:00.734917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.734945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.735202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.735227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.735468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.735493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.735736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.735762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.511 [2024-07-20 19:04:00.735994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.511 [2024-07-20 19:04:00.736019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.511 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.736262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.736287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.736503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.736528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.736801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.736827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.737064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.737089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.737326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.737354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 
00:33:50.512 [2024-07-20 19:04:00.737596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.737621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.737875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.737913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.738181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.738209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.738474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.738500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.738771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.738804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.739056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.739082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.739298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.739325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.739538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.739563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.739776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.739809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.740045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.740077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 
00:33:50.512 [2024-07-20 19:04:00.740312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.740337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.740553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.740578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.740833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.740859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.741106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.741143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.741379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.741404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.741668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.741693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.741926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.741952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.742226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.742256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.742477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.742502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.742708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.742735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 
00:33:50.512 [2024-07-20 19:04:00.743003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.743029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.743307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.743333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.743571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.743596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.743804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.743830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.744056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.744081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.744329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.744355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.744622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.744647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.744890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.744916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.745157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.745183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.512 [2024-07-20 19:04:00.745427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.745463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 
00:33:50.512 [2024-07-20 19:04:00.745700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.512 [2024-07-20 19:04:00.745726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.512 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.745974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.746000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.746223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.746248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.746481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.746507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.746739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.746764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.746994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.747020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.747245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.747271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.747509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.747534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.747774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.747807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.748060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.748085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 
00:33:50.513 [2024-07-20 19:04:00.748320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.748345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.748582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.748608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.748880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.748906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.749162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.749188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.749434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.749463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.749771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.749803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.750061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.750087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.750319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.750344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.750581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.750606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.750819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.750850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 
00:33:50.513 [2024-07-20 19:04:00.751096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.751121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.751337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.751363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.751596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.751621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.751880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.751906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.752123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.752148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.752411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.752436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.752650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.752677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.752921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.752947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.753191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.753217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.753456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.753481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 
00:33:50.513 [2024-07-20 19:04:00.753739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.753764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.754009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.754034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.754303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.754328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.754560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.754585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.754825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.754860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.755079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.755104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.755322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.755347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.755619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.755644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.755890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.755916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.756163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.756188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 
00:33:50.513 [2024-07-20 19:04:00.756396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.756423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.756682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.513 [2024-07-20 19:04:00.756707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.513 qpair failed and we were unable to recover it. 00:33:50.513 [2024-07-20 19:04:00.756944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.756970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.757191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.757217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.757426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.757453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.757718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.757743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.757956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.757982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.758227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.758253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.758475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.758501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.758767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.758809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 
00:33:50.514 [2024-07-20 19:04:00.759099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.759125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.759393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.759418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.759688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.759713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.759969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.759995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.760220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.760246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.760489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.760518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.760726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.760752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.760980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.761007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.761225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.761251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.761493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.761517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 
00:33:50.514 [2024-07-20 19:04:00.761737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.761762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.762015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.762041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.762255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.762280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.762544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.762569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.762807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.762834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.763050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.763075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.763319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.763345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.763553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.763578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.763826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.763852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.764091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.764116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 
00:33:50.514 [2024-07-20 19:04:00.764356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.764381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.764583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.764609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.764853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.764880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.765125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.765150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.765396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.765422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.765656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.765681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.765888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.765914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.766156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.766182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.766421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.766448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.766659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.766684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 
00:33:50.514 [2024-07-20 19:04:00.766954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.766980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.767216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.767241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.767485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.767514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.514 qpair failed and we were unable to recover it. 00:33:50.514 [2024-07-20 19:04:00.767750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.514 [2024-07-20 19:04:00.767776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.768019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.768044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.768248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.768273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.768509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.768535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.768758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.768784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.769044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.769069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.769291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.769316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 
00:33:50.515 [2024-07-20 19:04:00.769559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.769584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.769803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.769829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.770074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.770099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.770316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.770342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.770604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.770629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.770871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.770898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.771159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.771200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.771468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.771495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.771734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.771759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.771984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.772011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 
00:33:50.515 [2024-07-20 19:04:00.772274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.772300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.772579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.772604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.772849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.772875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.773095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.773122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.773382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.773408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.773674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.773699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.773914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.773942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.774183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.774209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.774447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.774473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.774685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.774716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 
00:33:50.515 [2024-07-20 19:04:00.774977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.775003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.775244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.775269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.775507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.775532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.775774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.775804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.776068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.776094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.776401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.776427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.776690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.776715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.776946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.776972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.777216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.777242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.777488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.777515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 
00:33:50.515 [2024-07-20 19:04:00.777778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.777811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.778064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.778092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.778356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.778382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.778630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.515 [2024-07-20 19:04:00.778656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.515 qpair failed and we were unable to recover it. 00:33:50.515 [2024-07-20 19:04:00.778872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.778898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.779126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.779152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.779368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.779395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.779635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.779661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.779941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.779968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.780209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.780235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 
00:33:50.516 [2024-07-20 19:04:00.780473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.780498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.780768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.780799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.781054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.781080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.781356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.781380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.781615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.781641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.781857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.781883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.782111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.782136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.782461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.782500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.782804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.782831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.783092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.783117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 
00:33:50.516 [2024-07-20 19:04:00.783434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.783459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.783743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.783767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.784052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.784078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.784303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.784328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.784542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.784567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.784787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.784821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.785065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.785090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.785336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.785361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.785689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.785712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.786028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.786059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 
00:33:50.516 [2024-07-20 19:04:00.786333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.786358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.786580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.786620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.786946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.786972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.787210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.787235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.787511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.787536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.787813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.787840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.788090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.788115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.788381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.788407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.788667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.788693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 00:33:50.516 [2024-07-20 19:04:00.788923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.516 [2024-07-20 19:04:00.788949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.516 qpair failed and we were unable to recover it. 
00:33:50.517 [2024-07-20 19:04:00.789230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.789254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.789527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.789553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.789789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.789819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.790053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.790079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.790311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.790337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.790594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.790619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.790885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.790916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.791275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.791328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.791593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.791621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.791877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.791904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 
00:33:50.517 [2024-07-20 19:04:00.792149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.792174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.792419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.792444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.792763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.792816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.793179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.793234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.793516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.793544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.793820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.793857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.794135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.794161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.794414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.794439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.794679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.794705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.794949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.794977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 
00:33:50.517 [2024-07-20 19:04:00.795244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.795269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.795483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.795509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.795723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.795765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.796069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.796097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.796357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.796386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.796625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.796650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.796892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.796919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.797167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.797194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.797514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.797539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.797768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.797965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 
00:33:50.517 [2024-07-20 19:04:00.798208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.798233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.798474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.798557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.798828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.798855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.799085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.799112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.517 [2024-07-20 19:04:00.799488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.517 [2024-07-20 19:04:00.799515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.517 qpair failed and we were unable to recover it. 00:33:50.789 [2024-07-20 19:04:00.799731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.789 [2024-07-20 19:04:00.799758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.789 qpair failed and we were unable to recover it. 00:33:50.789 [2024-07-20 19:04:00.800089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.789 [2024-07-20 19:04:00.800115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.789 qpair failed and we were unable to recover it. 00:33:50.789 [2024-07-20 19:04:00.800323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.789 [2024-07-20 19:04:00.800349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.789 qpair failed and we were unable to recover it. 00:33:50.789 [2024-07-20 19:04:00.800620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.789 [2024-07-20 19:04:00.800646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.789 qpair failed and we were unable to recover it. 00:33:50.789 [2024-07-20 19:04:00.800954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.789 [2024-07-20 19:04:00.800987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.789 qpair failed and we were unable to recover it. 
00:33:50.789 [2024-07-20 19:04:00.801299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.789 [2024-07-20 19:04:00.801327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.789 qpair failed and we were unable to recover it. 00:33:50.789 [2024-07-20 19:04:00.801577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.789 [2024-07-20 19:04:00.801604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.789 qpair failed and we were unable to recover it. 00:33:50.789 [2024-07-20 19:04:00.801904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.801932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.802202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.802228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.802469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.802557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.802902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.802930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.803170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.803196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.803538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.803565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.803841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.803868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.804090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.804117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 
00:33:50.790 [2024-07-20 19:04:00.804357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.804383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.804606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.804632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.804843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.804869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.805075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.805101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.805340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.805366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.805623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.805648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.805894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.805920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.806138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.806165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.806382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.806409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.806646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.806673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 
00:33:50.790 [2024-07-20 19:04:00.806994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.807022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.807234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.807261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.807476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.807503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.807738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.807764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.808000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.808027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.808264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.808289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.808504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.808530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.808740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.808766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.808984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.809010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.809253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.809283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 
00:33:50.790 [2024-07-20 19:04:00.809523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.809549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.809758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.809783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.810006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.810031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.810275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.810301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.810542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.810568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.810785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.810816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.811165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.811205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.811461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.811489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.811708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.811734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.811972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.811999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 
00:33:50.790 [2024-07-20 19:04:00.812246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.812271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.812533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.812558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.812821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.812848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.813096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.813122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.813356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.813381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.813625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.813653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.813871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.813898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.814222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.814247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.814492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.814517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.814757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.814782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 
00:33:50.790 [2024-07-20 19:04:00.815007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.815033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.815245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.815272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.815511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.815538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.815772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.815812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.816102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.816128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.816441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.816467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.816706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.816732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.817016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.817042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.817261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.817286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.790 [2024-07-20 19:04:00.817547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.817573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 
00:33:50.790 [2024-07-20 19:04:00.817791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.790 [2024-07-20 19:04:00.817824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.790 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.818060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.818085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.818342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.818367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.818589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.818614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.818873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.818899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.819140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.819166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.819383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.819408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.819623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.819648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.819908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.819934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.820215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.820245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 
00:33:50.791 [2024-07-20 19:04:00.820456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.820483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.820750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.820776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.820998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.821024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.821281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.821307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.821535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.821561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.821799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.821825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.822031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.822057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.822272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.822298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.822513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.822540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.822748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.822774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 
00:33:50.791 [2024-07-20 19:04:00.822999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.823025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.823242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.823268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.823503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.823531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.823744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.823769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.824122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.824162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.824437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.824465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.824682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.824710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.825034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.825061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.825286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.825312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.825562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.825588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 
00:33:50.791 [2024-07-20 19:04:00.825830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.825857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.826079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.826105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.826346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.826372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.826702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.826728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.826941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.826968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.827192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.827218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.827453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.827494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.827725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.827752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.827977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.828003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.828245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.828272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 
00:33:50.791 [2024-07-20 19:04:00.828530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.828555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.828800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.828827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.829134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.829161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.829444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.829469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.829734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.829759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.829987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.830013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.830251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.830277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.830484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.830509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.830752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.830778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.831000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.831026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 
00:33:50.791 [2024-07-20 19:04:00.831278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.831309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.831544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.831569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.831772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.831808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.832055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.832080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.832324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.832350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.832563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.832589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.832827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.832853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.833090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.833117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.833337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.833363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 00:33:50.791 [2024-07-20 19:04:00.833600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.791 [2024-07-20 19:04:00.833626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.791 qpair failed and we were unable to recover it. 
00:33:50.791 [2024-07-20 19:04:00.833836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.833862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.834080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.834106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.834365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.834391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.834616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.834642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.834865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.834892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.835095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.835121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.835327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.835352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.835592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.835618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.835832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.835857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f641c000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.836086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.836129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 
00:33:50.792 [2024-07-20 19:04:00.836354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.836382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.836599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.836625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.836891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.836919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.837141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.837167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.837432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.837461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.837695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.837722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.838013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.838045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.838285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.838311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.838515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.838541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.838750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.838776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 
00:33:50.792 [2024-07-20 19:04:00.838993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.839021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.839272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.839298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.839514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.839540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.839754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.839784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.840047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.840073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.840282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.840307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.840563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.840589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.840853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.840881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.841088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.841114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.841332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.841359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 
00:33:50.792 [2024-07-20 19:04:00.841586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.841612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.841863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.841889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.842100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.842126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.842370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.842396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.842635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.842661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.842906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.842933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.843209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.843235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.843477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.843503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.843719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.843748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.844000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.844027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 
00:33:50.792 [2024-07-20 19:04:00.844263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.844289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.844497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.844523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.844733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.844760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.845018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.845046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.845287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.845314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.845536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.845566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.845838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.845865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.846121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.792 [2024-07-20 19:04:00.846151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.792 qpair failed and we were unable to recover it. 00:33:50.792 [2024-07-20 19:04:00.846392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.846418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.846653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.846679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 
00:33:50.793 [2024-07-20 19:04:00.846923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.846950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.847166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.847193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.847437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.847464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.847729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.847756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.848002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.848029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.848350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.848378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.848616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.848648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.848913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.848941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.849151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.849177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.849410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.849439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 
00:33:50.793 [2024-07-20 19:04:00.849674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.849700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.849933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.849964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.850175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.850201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.850417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.850442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.850686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.850713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.850928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.850954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.851191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.851217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.851487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.851512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.851782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.851813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.852058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.852086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 
00:33:50.793 [2024-07-20 19:04:00.852356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.852382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.852601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.852626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.852892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.852918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.853153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.853178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.853382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.853408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.853669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.853695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.853906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.853933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.854188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.854214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.854460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.854485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.854749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.854775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 
00:33:50.793 [2024-07-20 19:04:00.855003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.855028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.855266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.855291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.855537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.855561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.855811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.855838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.856080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.856105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.856347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.856372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.856604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.856629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.856864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.856890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.857164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.857189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.857455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.857480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 
00:33:50.793 [2024-07-20 19:04:00.857738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.857763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.858033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.858059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.858271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.858296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.858558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.858584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.858900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.858926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.859191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.859217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.859456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.859485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.859741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.859767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.860053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.860079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.860351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.860376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 
00:33:50.793 [2024-07-20 19:04:00.860617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.860643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.860889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.860915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.861183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.861209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.861452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.793 [2024-07-20 19:04:00.861479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.793 qpair failed and we were unable to recover it. 00:33:50.793 [2024-07-20 19:04:00.861718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.861744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.861956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.861982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.862217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.862243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.862484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.862510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.862726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.862751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.862979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.863005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 
00:33:50.794 [2024-07-20 19:04:00.863215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.863240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.863451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.863478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.863744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.863769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.864011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.864038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.864251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.864279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.864539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.864564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.864826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.864853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.865071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.865098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.865349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.865374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.865586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.865611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 
00:33:50.794 [2024-07-20 19:04:00.865850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.865876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.866121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.866147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.866358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.866386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.866653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.866679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.866910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.866936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.867146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.867172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.867411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.867436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.867648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.867673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.867909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.867935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.868149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.868174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 
00:33:50.794 [2024-07-20 19:04:00.868438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.868464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.868669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.868694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.868929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.868956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.869213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.869238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.869451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.869476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.869691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.869718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.869929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.869960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.870204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.870232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.870475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.870501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.870741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.870766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 
00:33:50.794 [2024-07-20 19:04:00.871050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.871076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.871335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.871360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.871568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.871593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.871871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.871899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.872173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.872199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.872468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.872494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.872730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.872756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.873026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.873052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.873288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.873314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.873556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.873583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 
00:33:50.794 [2024-07-20 19:04:00.873856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.873882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.874101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.874127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.874372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.874398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.874667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.874692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.874914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.874942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.875215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.875241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.875477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.875503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.875737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.875763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.876019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.876046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.876286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.876312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 
00:33:50.794 [2024-07-20 19:04:00.876548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.794 [2024-07-20 19:04:00.876574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.794 qpair failed and we were unable to recover it. 00:33:50.794 [2024-07-20 19:04:00.876779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.795 [2024-07-20 19:04:00.876811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.795 qpair failed and we were unable to recover it. 00:33:50.795 [2024-07-20 19:04:00.877049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.795 [2024-07-20 19:04:00.877075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.795 qpair failed and we were unable to recover it. 00:33:50.795 [2024-07-20 19:04:00.877288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.795 [2024-07-20 19:04:00.877315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.795 qpair failed and we were unable to recover it. 00:33:50.795 [2024-07-20 19:04:00.877556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.795 [2024-07-20 19:04:00.877581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.795 qpair failed and we were unable to recover it. 00:33:50.795 [2024-07-20 19:04:00.877849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.795 [2024-07-20 19:04:00.877875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.795 qpair failed and we were unable to recover it. 00:33:50.795 [2024-07-20 19:04:00.878117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.795 [2024-07-20 19:04:00.878144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.795 qpair failed and we were unable to recover it. 00:33:50.795 [2024-07-20 19:04:00.878386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.795 [2024-07-20 19:04:00.878411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.795 qpair failed and we were unable to recover it. 00:33:50.795 [2024-07-20 19:04:00.878653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.795 [2024-07-20 19:04:00.878679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.795 qpair failed and we were unable to recover it. 00:33:50.795 [2024-07-20 19:04:00.878900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.795 [2024-07-20 19:04:00.878926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.795 qpair failed and we were unable to recover it. 
00:33:50.795 [2024-07-20 19:04:00.879172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.795 [2024-07-20 19:04:00.879198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.795 qpair failed and we were unable to recover it. 00:33:50.795 [2024-07-20 19:04:00.879408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.795 [2024-07-20 19:04:00.879436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.795 qpair failed and we were unable to recover it. 00:33:50.795 [2024-07-20 19:04:00.879648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.795 [2024-07-20 19:04:00.879675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.795 qpair failed and we were unable to recover it. 00:33:50.795 [2024-07-20 19:04:00.879936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.795 [2024-07-20 19:04:00.879975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.795 qpair failed and we were unable to recover it. 00:33:50.795 [2024-07-20 19:04:00.880243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.795 [2024-07-20 19:04:00.880269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.795 qpair failed and we were unable to recover it. 00:33:50.795 [2024-07-20 19:04:00.880508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.795 [2024-07-20 19:04:00.880535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.795 qpair failed and we were unable to recover it. 00:33:50.795 [2024-07-20 19:04:00.880771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.795 [2024-07-20 19:04:00.880806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.795 qpair failed and we were unable to recover it. 00:33:50.795 [2024-07-20 19:04:00.881051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.795 [2024-07-20 19:04:00.881079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.795 qpair failed and we were unable to recover it. 00:33:50.795 [2024-07-20 19:04:00.881377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.795 [2024-07-20 19:04:00.881419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.795 qpair failed and we were unable to recover it. 00:33:50.795 [2024-07-20 19:04:00.881738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.795 [2024-07-20 19:04:00.881762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.795 qpair failed and we were unable to recover it. 
00:33:50.799 [2024-07-20 19:04:00.937615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.937640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.937941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.937967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.938185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.938210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.938439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.938464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.938708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.938732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.938985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.939010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.939254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.939279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.939522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.939547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.939785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.939829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.940087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.940113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 
00:33:50.799 [2024-07-20 19:04:00.940356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.940382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.940616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.940656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.940876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.940902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.941137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.941162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.941428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.941454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.941660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.941685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.941897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.941923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.942161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.942186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.942476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.942501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.942746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.942772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 
00:33:50.799 [2024-07-20 19:04:00.943023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.943048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.943271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.943299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.943554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.943579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.943915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.943956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.944314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.944339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.944586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.944613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.944836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.944864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.945107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.945134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.945371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.945396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.945604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.945630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 
00:33:50.799 [2024-07-20 19:04:00.945844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.945870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.946081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.946108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.946367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.946392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.946628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.946654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.946872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.946898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.947148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.947173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.947427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.947458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.947692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.947719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.947940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.947966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.948219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.948245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 
00:33:50.799 [2024-07-20 19:04:00.948488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.948514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.948758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.948783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.949006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.949033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.949287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.949313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.949577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.949602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.949821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.949849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.950092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.950118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.950359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.950384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.950704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.950730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 00:33:50.799 [2024-07-20 19:04:00.951038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.799 [2024-07-20 19:04:00.951064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.799 qpair failed and we were unable to recover it. 
00:33:50.799 [2024-07-20 19:04:00.951335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.951362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.951603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.951628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.951866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.951892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.952136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.952161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.952394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.952419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.952636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.952661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.952917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.952942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.953227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.953253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.953490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.953517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.953749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.953774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 
00:33:50.800 [2024-07-20 19:04:00.954033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.954059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.954303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.954330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.954597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.954622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.954843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.954870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.955088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.955115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.955363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.955388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.955640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.955666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.955883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.955923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.956296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.956321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.956580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.956606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 
00:33:50.800 [2024-07-20 19:04:00.956843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.956869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.957109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.957134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.957372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.957399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.957666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.957692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.957911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.957937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.958173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.958199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.958464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.958493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.958710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.958736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.958960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.958987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.959198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.959225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 
00:33:50.800 [2024-07-20 19:04:00.959459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.959484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.959731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.959758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.959976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.960003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.960247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.960273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.960548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.960574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.960820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.960846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.961064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.961090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.961307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.961332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.961541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.961566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.961828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.961854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 
00:33:50.800 [2024-07-20 19:04:00.962095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.962121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.962364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.962391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.962661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.962686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.962925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.962951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.963196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.963223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.963464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.963490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.963747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.963772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.964014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.964039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.964308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.964334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.964582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.964607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 
00:33:50.800 [2024-07-20 19:04:00.964851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.964877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.965121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.965147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.965408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.965433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.965656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.965682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.965929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.965956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.966168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.966194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.966409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.800 [2024-07-20 19:04:00.966435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.800 qpair failed and we were unable to recover it. 00:33:50.800 [2024-07-20 19:04:00.966645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.966670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.966909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.966935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.967141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.967168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 
00:33:50.801 [2024-07-20 19:04:00.967433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.967459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.967670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.967696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.967938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.967964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.968190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.968216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.968484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.968510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.968718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.968743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.968984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.969010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.969248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.969274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.969540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.969565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.969778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.969810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 
00:33:50.801 [2024-07-20 19:04:00.970067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.970093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.970329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.970354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.970617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.970642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.970884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.970910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.971168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.971194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.971402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.971429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.971689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.971715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.971951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.971977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.972210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.972235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.972486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.972511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 
00:33:50.801 [2024-07-20 19:04:00.972724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.972764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.972988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.973015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.973253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.973278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.973490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.973532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.973764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.973790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.974024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.974049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.974266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.974291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.974491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.974515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.974788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.974820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.975078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.975104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 
00:33:50.801 [2024-07-20 19:04:00.975385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.975410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.975659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.975685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.975974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.976000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.976282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.976311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.976543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.976569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.976831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.976857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.977202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.977239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.977505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.977532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.977751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.977778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.978051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.978077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 
00:33:50.801 [2024-07-20 19:04:00.978386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.978412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.978662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.978687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.978931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.978957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.979169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.979209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.979438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.979463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.979728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.979753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.979989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.980015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.980262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.980287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.980543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.980569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.980815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.980842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 
00:33:50.801 [2024-07-20 19:04:00.981078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.981103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.801 [2024-07-20 19:04:00.981355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.801 [2024-07-20 19:04:00.981380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.801 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.981621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.981647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.981909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.981935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.982195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.982220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.982489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.982514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.982822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.982848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.983087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.983113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.983356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.983382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.983617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.983643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 
00:33:50.802 [2024-07-20 19:04:00.983900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.983936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.984177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.984203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.984443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.984470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.984735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.984760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.985004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.985032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.985278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.985303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.985569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.985595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.985832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.985858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.986095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.986121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.986360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.986386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 
00:33:50.802 [2024-07-20 19:04:00.986643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.986668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.986953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.986979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.987249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.987274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.987485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.987516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.987742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.987768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.988027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.988053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.988287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.988312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.988635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.988675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.989013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.989040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.989287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.989312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 
00:33:50.802 [2024-07-20 19:04:00.989549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.989574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.989813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.989840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.990084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.990109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.990317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.990343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.990602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.990627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.990841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.990868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.991087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.991112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.991368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.991394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.991640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.991666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.991880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.991907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 
00:33:50.802 [2024-07-20 19:04:00.992160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.992186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.992441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.992466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.992730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.992756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.993010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.993037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.993274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.993299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.993515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.993541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.993805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.993831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.994072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.994097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.994334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.994360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.994627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.994652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 
00:33:50.802 [2024-07-20 19:04:00.994997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.995024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.995295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.995321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.995585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.995610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.802 [2024-07-20 19:04:00.995857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.802 [2024-07-20 19:04:00.995883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.802 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:00.996147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:00.996172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:00.996413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:00.996438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:00.996711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:00.996736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:00.997104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:00.997142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:00.997429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:00.997457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:00.997700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:00.997725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 
00:33:50.803 [2024-07-20 19:04:00.997994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:00.998020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:00.998259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:00.998287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:00.998578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:00.998603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:00.998819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:00.998866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:00.999106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:00.999131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:00.999384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:00.999411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:00.999637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:00.999662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:00.999914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:00.999940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.000184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.000210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.000471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.000497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 
00:33:50.803 [2024-07-20 19:04:01.000761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.000786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.001070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.001096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.001306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.001332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.001593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.001619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.001832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.001858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.002081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.002108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.002348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.002375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.002617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.002644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.002886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.002912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.003154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.003179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 
00:33:50.803 [2024-07-20 19:04:01.003417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.003444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.003708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.003735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.004001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.004027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.004259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.004285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.004493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.004519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.004798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.004824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.005097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.005123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.005409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.005435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.005691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.005715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.005971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.005998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 
00:33:50.803 [2024-07-20 19:04:01.006217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.006243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.006482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.006507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.006714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.006740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.006978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.007004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.007237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.007262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.007465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.007491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.007709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.007734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.007951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.007977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.008187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.008213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.008554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.008579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 
00:33:50.803 [2024-07-20 19:04:01.008858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.008884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.009135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.009160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.009406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.009431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.009673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.009703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.009916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.009942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.010145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.010171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.010432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.010457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.010698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.010723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.010944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.010971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.011244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.011270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 
00:33:50.803 [2024-07-20 19:04:01.011484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.011511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.803 [2024-07-20 19:04:01.011723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.803 [2024-07-20 19:04:01.011749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.803 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.011979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.012005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.012243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.012269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.012478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.012504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.012765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.012790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.013004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.013030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.013249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.013275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.013499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.013525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.013761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.013787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 
00:33:50.804 [2024-07-20 19:04:01.014014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.014039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.014308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.014333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.014640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.014665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.014883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.014911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.015138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.015163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.015389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.015415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.015654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.015679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.015927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.015952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.016245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.016286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.016568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.016594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 
00:33:50.804 [2024-07-20 19:04:01.016894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.016921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.017129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.017154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.017391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.017417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.017658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.017683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.017908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.017933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.018184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.018209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.018467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.018492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.018784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.018817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.019027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.019052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.019291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.019316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 
00:33:50.804 [2024-07-20 19:04:01.019639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.019678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.019997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.020023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.020265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.020290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.020510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.020542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.020812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.020838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.021074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.021099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.021340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.021367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.021631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.021657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.021921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.021947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.022213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.022239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 
00:33:50.804 [2024-07-20 19:04:01.022456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.022483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.022732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.022758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.023089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.023130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.023401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.023427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.023749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.023787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.024050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.024076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.024290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.024330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.024604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.024629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.024910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.024936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.025147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.025174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 
00:33:50.804 [2024-07-20 19:04:01.025424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.025449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.025726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.025751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.026019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.026045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.026307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.026333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.026598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.026622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.026918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.026944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.027181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.027204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.027461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.027486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.027744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.027770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 00:33:50.804 [2024-07-20 19:04:01.028044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.804 [2024-07-20 19:04:01.028070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.804 qpair failed and we were unable to recover it. 
00:33:50.804 [2024-07-20 19:04:01.028334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.028360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.028624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.028650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.028890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.028916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.029159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.029184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.029429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.029456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.029696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.029722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.029931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.029958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.030163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.030190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.030493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.030518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.030770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.030800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 
00:33:50.805 [2024-07-20 19:04:01.031019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.031045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.031295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.031321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.031590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.031616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.031860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.031892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.032106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.032131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.032367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.032406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.032632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.032659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.032901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.032928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.033166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.033191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.033434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.033460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 
00:33:50.805 [2024-07-20 19:04:01.033693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.033720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.033965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.033991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.034256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.034282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.034520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.034545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.034791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.034822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.035037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.035062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.035306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.035332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.035575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.035600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.035865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.035891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.036151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.036176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 
00:33:50.805 [2024-07-20 19:04:01.036391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.036416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.036654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.036680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.036934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.036959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.037210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.037236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.037501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.037526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.037839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.037864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.038109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.038135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.038399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.038424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.038681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.038706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.038970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.038995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 
00:33:50.805 [2024-07-20 19:04:01.039240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.039265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.039507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.039533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.039774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.039804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.040068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.040093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.040338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.040365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.040696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.040736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.805 [2024-07-20 19:04:01.041032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.805 [2024-07-20 19:04:01.041058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.805 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.041343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.041368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.041644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.041669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.041936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.041962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 
00:33:50.806 [2024-07-20 19:04:01.042214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.042238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.042493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.042518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.042781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.042813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.043063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.043092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.043332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.043357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.043619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.043644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.043960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.043985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.044237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.044263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.044502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.044541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.044898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.044924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 
00:33:50.806 [2024-07-20 19:04:01.045165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.045191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.045523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.045562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.045835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.045862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.046127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.046152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.046423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.046448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.046682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.046710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.046948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.046974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.047219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.047246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.047459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.047484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.047728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.047753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 
00:33:50.806 [2024-07-20 19:04:01.048019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.048045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.048307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.048333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.048579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.048604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.048825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.048851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.049087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.049114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.049353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.049379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.049650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.049676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.049938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.049964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.050227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.050253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.050525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.050550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 
00:33:50.806 [2024-07-20 19:04:01.050799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.050825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.051039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.051066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.051315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.051341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.051556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.051582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.051853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.051880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.052104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.052129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.052359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.052385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.052602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.052628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.052844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.052870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.053109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.053135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 
00:33:50.806 [2024-07-20 19:04:01.053343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.053371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.053616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.053641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.053848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.053874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.054157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.054187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.054400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.054427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.054638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.054665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.054942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.054969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.055221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.055250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.055491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.055517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.055807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.055833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 
00:33:50.806 [2024-07-20 19:04:01.056071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.056098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.056311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.056336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.056577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.056603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.056808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.056835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.057071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.057098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.057316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.806 [2024-07-20 19:04:01.057342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.806 qpair failed and we were unable to recover it. 00:33:50.806 [2024-07-20 19:04:01.057580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.057606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.057823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.057850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.058093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.058119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.058389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.058415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 
00:33:50.807 [2024-07-20 19:04:01.058662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.058688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.058934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.058961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.059178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.059204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.059448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.059475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.059750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.059776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.060000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.060028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.060295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.060320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.060525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.060551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.060758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.060783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.061004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.061031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 
00:33:50.807 [2024-07-20 19:04:01.061269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.061298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.061533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.061559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.061768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.061810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.062064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.062090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.062349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.062375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.062612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.062638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.062879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.062905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.063147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.063173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.063410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.063437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.063648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.063675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 
00:33:50.807 [2024-07-20 19:04:01.063947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.063974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.064213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.064238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.064457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.064488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.064705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.064736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.064976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.065002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.065263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.065289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.065534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.065559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.065768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.065798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.066065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.066091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.066356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.066385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 
00:33:50.807 [2024-07-20 19:04:01.066635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.066661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.066922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.066949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.067191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.067217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.067452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.067479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.067724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.067751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.067966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.067991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.068212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.068240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.068491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.068518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.068764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.068790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.069060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.069086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 
00:33:50.807 [2024-07-20 19:04:01.069325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.069351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.069604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.069633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.069846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.069876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.070098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.070124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.070336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.070362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.070570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.070600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.070848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.070875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.071095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.071122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.071365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.071391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.071637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.071666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 
00:33:50.807 [2024-07-20 19:04:01.071912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.071939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.072154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.072180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.072389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.072419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.072653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.072679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.807 qpair failed and we were unable to recover it. 00:33:50.807 [2024-07-20 19:04:01.072927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.807 [2024-07-20 19:04:01.072955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.073171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.073198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.073409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.073436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.073648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.073674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.073889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.073916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.074194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.074219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 
00:33:50.808 [2024-07-20 19:04:01.074452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.074479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.074687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.074716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.074937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.074968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.075188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.075221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.075487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.075517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.075754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.075780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.075994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.076023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.076267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.076296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.076515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.076541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.076784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.076818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 
00:33:50.808 [2024-07-20 19:04:01.077037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.077064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.077289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.077315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.077528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.077554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.077814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.077841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.078062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.078088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.078369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.078394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.078613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.078639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.078887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.078914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.079161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.079187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.079409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.079435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 
00:33:50.808 [2024-07-20 19:04:01.079705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.079730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.079997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.080023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.080276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.080301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.080543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.080572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.080814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.080841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.081060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.081086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.081327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.081354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.081609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.081635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.081879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.081905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.082124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.082150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 
00:33:50.808 [2024-07-20 19:04:01.082387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.082413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.082679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.082706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.082976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.083003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.083240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.083266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.083476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.083506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.083762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.083789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.084023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.084050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.084300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.084325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.084567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.084594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.084860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.084886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 
00:33:50.808 [2024-07-20 19:04:01.085127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.085153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.808 qpair failed and we were unable to recover it. 00:33:50.808 [2024-07-20 19:04:01.085361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.808 [2024-07-20 19:04:01.085387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.085599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.085625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.085847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.085877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.086098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.086123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.086336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.086362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.086598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.086624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.086861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.086888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.087104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.087131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.087340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.087365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 
00:33:50.809 [2024-07-20 19:04:01.087574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.087600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.087855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.087892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.088112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.088139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.088357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.088386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.088603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.088628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.088871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.088901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.089119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.089146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.089387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.089414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.089628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.089653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.089866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.089894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 
00:33:50.809 [2024-07-20 19:04:01.090139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.090166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.090376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.090403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.090644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.090670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.090939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.090966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.091176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.091203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.091427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.091452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.091709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.091736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.091957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.091983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.092192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.092220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.092432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.092458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 
00:33:50.809 [2024-07-20 19:04:01.092681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.092707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.092937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.092964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.093202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.093228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.093451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.093477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.093723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.093749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.093974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.094001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.094249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.094274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.094481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.094507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.094716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.094742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.094988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.095015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 
00:33:50.809 [2024-07-20 19:04:01.095278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.095304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.095569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.095595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.095808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.095834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.096125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.096155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.096416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.096442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.096669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.096695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.096942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.096968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.097188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.097213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.097448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.097477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:50.809 [2024-07-20 19:04:01.097700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.097726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 
00:33:50.809 [2024-07-20 19:04:01.097968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.809 [2024-07-20 19:04:01.097994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:50.809 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.098232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.098260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.098508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.098539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.098785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.098818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.099035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.099061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.099283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.099309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.099551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.099577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.099825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.099852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.100059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.100085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.100321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.100347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 
00:33:51.082 [2024-07-20 19:04:01.100557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.100582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.100803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.100829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.101076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.101103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.101421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.101447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.101684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.101712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.101999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.102025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.102274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.102300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.102514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.102540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.102778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.102810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.103055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.103081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 
00:33:51.082 [2024-07-20 19:04:01.103293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.103319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.103559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.103585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.103829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.103858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.104083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.104109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.104323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.104349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.104568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.104594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.104839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.104865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.105081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.105107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.105384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.105410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.105658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.105684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 
00:33:51.082 [2024-07-20 19:04:01.105886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.105912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.082 [2024-07-20 19:04:01.106122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.082 [2024-07-20 19:04:01.106148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.082 qpair failed and we were unable to recover it. 00:33:51.083 [2024-07-20 19:04:01.106361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.083 [2024-07-20 19:04:01.106391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.083 qpair failed and we were unable to recover it. 00:33:51.083 [2024-07-20 19:04:01.106631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.083 [2024-07-20 19:04:01.106657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.083 qpair failed and we were unable to recover it. 00:33:51.083 [2024-07-20 19:04:01.106879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.083 [2024-07-20 19:04:01.106908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.083 qpair failed and we were unable to recover it. 00:33:51.083 [2024-07-20 19:04:01.107154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.083 [2024-07-20 19:04:01.107181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.083 qpair failed and we were unable to recover it. 00:33:51.083 [2024-07-20 19:04:01.107415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.083 [2024-07-20 19:04:01.107440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.083 qpair failed and we were unable to recover it. 00:33:51.083 [2024-07-20 19:04:01.107662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.083 [2024-07-20 19:04:01.107687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.083 qpair failed and we were unable to recover it. 00:33:51.083 [2024-07-20 19:04:01.107901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.083 [2024-07-20 19:04:01.107928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.083 qpair failed and we were unable to recover it. 00:33:51.083 [2024-07-20 19:04:01.108147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.083 [2024-07-20 19:04:01.108172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.083 qpair failed and we were unable to recover it. 
00:33:51.083 [2024-07-20 19:04:01.108412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.083 [2024-07-20 19:04:01.108441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.083 qpair failed and we were unable to recover it. 00:33:51.083 [2024-07-20 19:04:01.108658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.083 [2024-07-20 19:04:01.108684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.083 qpair failed and we were unable to recover it. 00:33:51.083 [2024-07-20 19:04:01.108910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.083 [2024-07-20 19:04:01.108936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.083 qpair failed and we were unable to recover it. 00:33:51.083 [2024-07-20 19:04:01.109181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.083 [2024-07-20 19:04:01.109207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.083 qpair failed and we were unable to recover it. 00:33:51.083 [2024-07-20 19:04:01.109422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.083 [2024-07-20 19:04:01.109448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.083 qpair failed and we were unable to recover it. 00:33:51.083 [2024-07-20 19:04:01.109714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.083 [2024-07-20 19:04:01.109740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.083 qpair failed and we were unable to recover it. 00:33:51.083 [2024-07-20 19:04:01.109978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.083 [2024-07-20 19:04:01.110004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.083 qpair failed and we were unable to recover it. 00:33:51.083 [2024-07-20 19:04:01.110261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.083 [2024-07-20 19:04:01.110291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.083 qpair failed and we were unable to recover it. 00:33:51.083 [2024-07-20 19:04:01.110532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.083 [2024-07-20 19:04:01.110558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.083 qpair failed and we were unable to recover it. 00:33:51.083 [2024-07-20 19:04:01.110773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.083 [2024-07-20 19:04:01.110806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.083 qpair failed and we were unable to recover it. 
00:33:51.083 [2024-07-20 19:04:01.111048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.083 [2024-07-20 19:04:01.111077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420
00:33:51.083 qpair failed and we were unable to recover it.
00:33:51.083 [the same three-line sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats continuously for every retry between 19:04:01.111316 and 19:04:01.169111]
00:33:51.087 [2024-07-20 19:04:01.169111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.087 [2024-07-20 19:04:01.169137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420
00:33:51.087 qpair failed and we were unable to recover it.
00:33:51.087 [2024-07-20 19:04:01.169374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.169417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.169685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.169711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.169985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.170012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.170276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.170301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.170588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.170613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.170877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.170903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.171118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.171158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.171366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.171392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.171643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.171670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.171943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.171970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 
00:33:51.087 [2024-07-20 19:04:01.172229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.172254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.172561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.172586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.172868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.172894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.173128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.173154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.173396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.173421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.173681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.173707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.173947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.173973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.174236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.174262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.174571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.174596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.174888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.174914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 
00:33:51.087 [2024-07-20 19:04:01.175160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.175185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.175433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.175460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.175694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.175719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.175983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.176010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.176290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.176316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.176549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.176575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.176838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.176864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.177080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.177107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.177374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.177399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.177660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.177685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 
00:33:51.087 [2024-07-20 19:04:01.177944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.177970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.178211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.178237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.178498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.178524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.178758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.178783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.179012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.179038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.179281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.179306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.179545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.087 [2024-07-20 19:04:01.179585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.087 qpair failed and we were unable to recover it. 00:33:51.087 [2024-07-20 19:04:01.179787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.179820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.180107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.180133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.180378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.180403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 
00:33:51.088 [2024-07-20 19:04:01.180667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.180696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.180939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.180965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.181227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.181253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.181492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.181534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.181787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.181827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.182066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.182091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.182330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.182355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.182604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.182630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.182870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.182896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.183157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.183181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 
00:33:51.088 [2024-07-20 19:04:01.183429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.183456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.183693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.183719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.183954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.183981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.184224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.184250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.184500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.184526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.184749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.184789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.185056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.185081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.185357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.185383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.185619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.185645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.185889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.185916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 
00:33:51.088 [2024-07-20 19:04:01.186158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.186184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.186495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.186535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.186808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.186845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.187093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.187118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.187378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.187403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.187642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.187668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.187888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.187916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.188184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.188210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.188431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.188471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.188778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.188825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 
00:33:51.088 [2024-07-20 19:04:01.189070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.189095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.189333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.189359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.189619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.189645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.189898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.189925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.190143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.190169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.190412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.190437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.190703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.190728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.190948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.190975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.191185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.191211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.191435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.191459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 
00:33:51.088 [2024-07-20 19:04:01.191741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.191771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.192003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.192029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.192291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.192317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.192551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.192576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.192784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.192815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.193030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.088 [2024-07-20 19:04:01.193056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.088 qpair failed and we were unable to recover it. 00:33:51.088 [2024-07-20 19:04:01.193345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.193371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.193616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.193642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.193877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.193904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.194147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.194173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 
00:33:51.089 [2024-07-20 19:04:01.194415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.194440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.194676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.194701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.194975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.195001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.195259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.195284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.195585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.195611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.195870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.195897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.196118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.196144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.196369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.196395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.196644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.196670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.196882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.196910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 
00:33:51.089 [2024-07-20 19:04:01.197151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.197179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.197400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.197426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.197650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.197675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.197984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.198011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.198222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.198249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.198461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.198486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.198727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.198752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.199007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.199034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.199271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.199297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.199501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.199528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 
00:33:51.089 [2024-07-20 19:04:01.199803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.199840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.200077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.200105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.200339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.200365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.200606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.200633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.200891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.200918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.201183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.201209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.201451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.201477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.201725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.201752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.202000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.202027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.202297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.202323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 
00:33:51.089 [2024-07-20 19:04:01.202568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.202599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.202820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.202846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.203085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.203110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.203381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.203407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.203646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.203671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.203913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.203940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.204178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.204203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.204466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.204492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.204735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.204760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.205003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.205029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 
00:33:51.089 [2024-07-20 19:04:01.205240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.205266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.205482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.205510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.205775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.205805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.206045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.206071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.206360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.206385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.206708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.206749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.206992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.207018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.207241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.207266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.207486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.207511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.207753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.207779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 
00:33:51.089 [2024-07-20 19:04:01.208060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.208086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.208350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.208376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.208650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.208676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.089 [2024-07-20 19:04:01.208941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.089 [2024-07-20 19:04:01.208968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.089 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.209208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.209234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.209493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.209518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.209730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.209756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.210006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.210032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.210279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.210305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.210600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.210626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 
00:33:51.090 [2024-07-20 19:04:01.210832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.210858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.211116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.211141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.211381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.211407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.211619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.211645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.211882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.211908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.212122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.212149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.212389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.212416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.212657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.212683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.212922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.212948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.213164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.213189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 
00:33:51.090 [2024-07-20 19:04:01.213402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.213431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.213670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.213696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.213943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.213969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.214235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.214260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.214503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.214530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.214782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.214813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.215058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.215084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.215328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.215354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.215609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.215635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.215898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.215925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 
00:33:51.090 [2024-07-20 19:04:01.216142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.216167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.216436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.216461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.216701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.216727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.216963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.216989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.217229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.217255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.217464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.217490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.217728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.217755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.218000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.218027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.218267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.218292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.218527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.218553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 
00:33:51.090 [2024-07-20 19:04:01.218775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.218805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.219046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.219073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.219319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.219345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.219577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.219602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.219822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.219848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.220069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.220094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.220310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.220336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.220616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.220658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.220910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.220939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.221159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.221185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 
00:33:51.090 [2024-07-20 19:04:01.221428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.221456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.221731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.221757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.221980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.222007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.222247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.222273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.222488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.222514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.222727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.222753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.222971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.222997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.223262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.223288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.223551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.223577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.223836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.223862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 
00:33:51.090 [2024-07-20 19:04:01.224107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.224132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.090 [2024-07-20 19:04:01.224353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.090 [2024-07-20 19:04:01.224378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.090 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.224604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.224630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.224928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.224954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.225195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.225221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.225464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.225489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.225787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.225818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.226061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.226086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.226328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.226354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.226575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.226601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 
00:33:51.091 [2024-07-20 19:04:01.226813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.226839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.227072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.227096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.227314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.227340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.227579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.227605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.227853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.227884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.228102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.228127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.228390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.228415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.228664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.228689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.228907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.228934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.229171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.229196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 
00:33:51.091 [2024-07-20 19:04:01.229431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.229456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.229700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.229725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.229991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.230017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.230282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.230307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.230546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.230571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.230835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.230861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.231108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.231133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.231376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.231403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.231629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.231654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.231897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.231923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 
00:33:51.091 [2024-07-20 19:04:01.232161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.232186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.232453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.232478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.232747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.232772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.233072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.233111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.233369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.233395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.233620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.233645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.233890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.233918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.234195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.234221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.234457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.234483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.234747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.234772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 
00:33:51.091 [2024-07-20 19:04:01.235029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.235065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.235303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.235334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.235573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.235599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.235848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.235874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.236138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.236163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.236401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.236426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.236664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.236690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.236929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.236954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.237199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.237224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.237466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.237491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 
00:33:51.091 [2024-07-20 19:04:01.237732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.237757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.237972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.237997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.091 qpair failed and we were unable to recover it. 00:33:51.091 [2024-07-20 19:04:01.238258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.091 [2024-07-20 19:04:01.238286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.238552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.238577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.238787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.238821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.239053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.239078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.239318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.239343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.239563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.239588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.239821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.239847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.240057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.240082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 
00:33:51.092 [2024-07-20 19:04:01.240297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.240323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.240536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.240561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.240779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.240812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.241057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.241082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.241302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.241327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.241585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.241610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.241848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.241874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.242158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.242184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.242451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.242477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.242711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.242737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 
00:33:51.092 [2024-07-20 19:04:01.242974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.243000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.243211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.243238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.243508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.243534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.243770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.243802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.244023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.244050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.244320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.244345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.244584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.244609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.244839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.244865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.245106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.245131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.245350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.245375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 
00:33:51.092 [2024-07-20 19:04:01.245623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.245648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.245891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.245921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.246141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.246167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.246413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.246438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.246649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.246673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.246890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.246916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.247154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.247180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.247420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.247445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.247682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.247707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.247948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.247974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 
00:33:51.092 [2024-07-20 19:04:01.248218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.248243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.248504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.248529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.248765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.248789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.249047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.249072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.249312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.249338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.249573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.249599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.249814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.249844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.250070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.250096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.250336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.250363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 00:33:51.092 [2024-07-20 19:04:01.250610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.092 [2024-07-20 19:04:01.250635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.092 qpair failed and we were unable to recover it. 
00:33:51.092 [2024-07-20 19:04:01.250883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.092 [2024-07-20 19:04:01.250915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420
00:33:51.092 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats roughly 200 more times between 19:04:01.251 and 19:04:01.305 ...]
00:33:51.096 [2024-07-20 19:04:01.305914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.096 [2024-07-20 19:04:01.305941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420
00:33:51.096 qpair failed and we were unable to recover it.
00:33:51.096 [2024-07-20 19:04:01.306202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.096 [2024-07-20 19:04:01.306228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.096 qpair failed and we were unable to recover it. 00:33:51.096 [2024-07-20 19:04:01.306440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.096 [2024-07-20 19:04:01.306465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.096 qpair failed and we were unable to recover it. 00:33:51.096 [2024-07-20 19:04:01.306684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.096 [2024-07-20 19:04:01.306713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.096 qpair failed and we were unable to recover it. 00:33:51.096 [2024-07-20 19:04:01.306936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.096 [2024-07-20 19:04:01.306964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.096 qpair failed and we were unable to recover it. 00:33:51.096 [2024-07-20 19:04:01.307177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.096 [2024-07-20 19:04:01.307202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.096 qpair failed and we were unable to recover it. 00:33:51.096 [2024-07-20 19:04:01.307440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.096 [2024-07-20 19:04:01.307465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.096 qpair failed and we were unable to recover it. 00:33:51.096 [2024-07-20 19:04:01.307678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.096 [2024-07-20 19:04:01.307705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.096 qpair failed and we were unable to recover it. 00:33:51.096 [2024-07-20 19:04:01.307971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.096 [2024-07-20 19:04:01.307997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.096 qpair failed and we were unable to recover it. 00:33:51.096 [2024-07-20 19:04:01.308238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.096 [2024-07-20 19:04:01.308264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.096 qpair failed and we were unable to recover it. 00:33:51.096 [2024-07-20 19:04:01.308505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.096 [2024-07-20 19:04:01.308530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.096 qpair failed and we were unable to recover it. 
00:33:51.096 [2024-07-20 19:04:01.308774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.096 [2024-07-20 19:04:01.308804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.096 qpair failed and we were unable to recover it. 00:33:51.096 [2024-07-20 19:04:01.309020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.096 [2024-07-20 19:04:01.309045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.096 qpair failed and we were unable to recover it. 00:33:51.096 [2024-07-20 19:04:01.309254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.096 [2024-07-20 19:04:01.309279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.096 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.309494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.309519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.309760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.309786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.310104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.310130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.310373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.310398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.310615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.310641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.310904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.310930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.311174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.311199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 
00:33:51.097 [2024-07-20 19:04:01.311407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.311433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.311648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.311675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.311895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.311921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.312129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.312156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.312365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.312390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.312633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.312658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.312926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.312951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.313172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.313197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.313435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.313461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.313674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.313700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 
00:33:51.097 [2024-07-20 19:04:01.313916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.313943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.314181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.314207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.314428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.314453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.314672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.314697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.314918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.314946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.315155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.315181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.315393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.315419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.315664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.315689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.315901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.315932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.316147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.316172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 
00:33:51.097 [2024-07-20 19:04:01.316390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.316417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.316626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.316651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.316888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.316914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.317131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.317156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.317369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.317394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.317631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.317656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.317923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.317949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.318162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.318187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.318431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.318456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.318714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.318739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 
00:33:51.097 [2024-07-20 19:04:01.318956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.318983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.319220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.319245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.319515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.319540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.319868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.319893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.320107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.320134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.320384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.320409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.320624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.320649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.320912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.320937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.321157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.321182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 00:33:51.097 [2024-07-20 19:04:01.321425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.097 [2024-07-20 19:04:01.321450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f642c000b90 with addr=10.0.0.2, port=4420 00:33:51.097 qpair failed and we were unable to recover it. 
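Note on the repeated failure above: errno = 111 is ECONNREFUSED on Linux, meaning the TCP connect() to 10.0.0.2 on port 4420 (the conventional NVMe/TCP listener port) was actively refused because nothing was accepting connections at that address at this point in the run. The following minimal C sketch is illustrative only, not SPDK code; it reproduces the same errno against an address and port with no listener (both values are placeholders copied from the log).

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Placeholder target copied from the log; adjust as needed. */
        const char *addr = "10.0.0.2";
        const int port = 4420;

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa;
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(port);
        inet_pton(AF_INET, addr, &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* With no listener on addr:port this prints errno 111 (ECONNREFUSED). */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        } else {
            printf("connected\n");
        }

        close(fd);
        return 0;
    }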
[... three further attempts against tqpair=0x7f642c000b90 fail the same way between 19:04:01.321678 and 19:04:01.322208, after which the failures continue against a new qpair ...]
00:33:51.097 [2024-07-20 19:04:01.322433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.097 [2024-07-20 19:04:01.322477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420
00:33:51.097 qpair failed and we were unable to recover it.
[... the identical failure against tqpair=0x7f6424000b90 (connect() failed with errno = 111, addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it) repeats for every connection attempt logged from 19:04:01.322692 through 19:04:01.350635 ...]
00:33:51.099 [2024-07-20 19:04:01.350916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.099 [2024-07-20 19:04:01.350943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.099 qpair failed and we were unable to recover it. 00:33:51.099 [2024-07-20 19:04:01.351159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.099 [2024-07-20 19:04:01.351185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.099 qpair failed and we were unable to recover it. 00:33:51.099 [2024-07-20 19:04:01.351458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.099 [2024-07-20 19:04:01.351499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.099 qpair failed and we were unable to recover it. 00:33:51.099 [2024-07-20 19:04:01.351713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.099 [2024-07-20 19:04:01.351740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.099 qpair failed and we were unable to recover it. 00:33:51.099 [2024-07-20 19:04:01.351957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.099 [2024-07-20 19:04:01.351985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.099 qpair failed and we were unable to recover it. 00:33:51.099 [2024-07-20 19:04:01.352224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.099 [2024-07-20 19:04:01.352249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.099 qpair failed and we were unable to recover it. 00:33:51.099 [2024-07-20 19:04:01.352483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.352509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.352760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.352785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.353008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.353033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.353272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.353297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 
00:33:51.100 [2024-07-20 19:04:01.353511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.353538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.353749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.353775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.354034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.354059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.354276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.354301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.354531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.354557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.354799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.354840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.355055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.355080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.355299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.355324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.355559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.355585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.355805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.355831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 
00:33:51.100 [2024-07-20 19:04:01.356116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.356141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.356381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.356408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.356610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.356636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.356851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.356877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.357127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.357153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.357358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.357383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.357620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.357645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.357860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.357886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.358141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.358166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.358381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.358407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 
00:33:51.100 [2024-07-20 19:04:01.358616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.358641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.358852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.358877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.359076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.359102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.359337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.359362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.359575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.359600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.359817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.359843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.360080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.360105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.360341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.360366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.360632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.360657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.360874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.360900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 
00:33:51.100 [2024-07-20 19:04:01.361138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.361164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.361380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.361405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.361618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.361648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.361865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.361891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.362128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.362153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.362393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.362418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.362651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.362677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.362910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.362936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.363151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.363176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.363385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.363410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 
00:33:51.100 [2024-07-20 19:04:01.363646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.363671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.363938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.363964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.364200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.364224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.364459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.364484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.364722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.364747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.364947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.364973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.365233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.365272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.365524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.365553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.365803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.365830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.366058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.366084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 
00:33:51.100 [2024-07-20 19:04:01.366342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.366368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.366700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.366740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.367010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.367036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.367253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.100 [2024-07-20 19:04:01.367279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.100 qpair failed and we were unable to recover it. 00:33:51.100 [2024-07-20 19:04:01.367512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.367538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.367778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.367821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.368038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.368064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.368271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.368297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.368540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.368566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.368834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.368866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 
00:33:51.101 [2024-07-20 19:04:01.369088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.369113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.369353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.369377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.369619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.369645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.369894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.369921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.370142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.370168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.370430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.370455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.370669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.370694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.370932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.370958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.371174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.371214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.371472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.371497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 
00:33:51.101 [2024-07-20 19:04:01.371762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.371788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.372034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.372060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.372303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.372328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.372603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.372628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.372896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.372922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.373135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.373160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.373422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.373447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.373755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.373781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.374031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.374057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.374346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.374371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 
00:33:51.101 [2024-07-20 19:04:01.374683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.374708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.374967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.374993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.375212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.375237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.375476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.375501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.375713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.375738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.375959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.375987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.376227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.376252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.376512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.376538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.376770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.376805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.377025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.377050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 
00:33:51.101 [2024-07-20 19:04:01.377305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.377331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.377561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.377587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.377827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.377854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.378092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.378117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.378359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.378384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.378715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.378755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.379039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.379065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.379416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.379456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.379700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.379725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.379942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.379973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 
00:33:51.101 [2024-07-20 19:04:01.380209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.380234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.380434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.380459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.380699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.380727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.380935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.380961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.101 [2024-07-20 19:04:01.381203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.101 [2024-07-20 19:04:01.381230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.101 qpair failed and we were unable to recover it. 00:33:51.102 [2024-07-20 19:04:01.381487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.102 [2024-07-20 19:04:01.381512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.102 qpair failed and we were unable to recover it. 00:33:51.102 [2024-07-20 19:04:01.381744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.102 [2024-07-20 19:04:01.381769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.102 qpair failed and we were unable to recover it. 00:33:51.102 [2024-07-20 19:04:01.382022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.102 [2024-07-20 19:04:01.382048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.102 qpair failed and we were unable to recover it. 00:33:51.102 [2024-07-20 19:04:01.382351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.102 [2024-07-20 19:04:01.382378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.102 qpair failed and we were unable to recover it. 00:33:51.102 [2024-07-20 19:04:01.382716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.102 [2024-07-20 19:04:01.382742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.102 qpair failed and we were unable to recover it. 
00:33:51.102 [2024-07-20 19:04:01.382956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.102 [2024-07-20 19:04:01.382983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.102 qpair failed and we were unable to recover it. 00:33:51.102 [2024-07-20 19:04:01.383246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.102 [2024-07-20 19:04:01.383271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.102 qpair failed and we were unable to recover it. 00:33:51.102 [2024-07-20 19:04:01.383586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.102 [2024-07-20 19:04:01.383611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.102 qpair failed and we were unable to recover it. 00:33:51.102 [2024-07-20 19:04:01.383882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.102 [2024-07-20 19:04:01.383910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.102 qpair failed and we were unable to recover it. 00:33:51.102 [2024-07-20 19:04:01.384173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.102 [2024-07-20 19:04:01.384199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.102 qpair failed and we were unable to recover it. 00:33:51.102 [2024-07-20 19:04:01.384448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.102 [2024-07-20 19:04:01.384474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.102 qpair failed and we were unable to recover it. 00:33:51.102 [2024-07-20 19:04:01.384742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.102 [2024-07-20 19:04:01.384768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.102 qpair failed and we were unable to recover it. 00:33:51.102 [2024-07-20 19:04:01.385015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.102 [2024-07-20 19:04:01.385041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.102 qpair failed and we were unable to recover it. 00:33:51.102 [2024-07-20 19:04:01.385305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.102 [2024-07-20 19:04:01.385330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.102 qpair failed and we were unable to recover it. 00:33:51.102 [2024-07-20 19:04:01.385541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.102 [2024-07-20 19:04:01.385567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420 00:33:51.102 qpair failed and we were unable to recover it. 
00:33:51.102 [2024-07-20 19:04:01.385806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.102 [2024-07-20 19:04:01.385833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6424000b90 with addr=10.0.0.2, port=4420
00:33:51.102 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for every reconnect attempt against tqpair=0x7f6424000b90 through 2024-07-20 19:04:01.398042 ...]
00:33:51.377 [2024-07-20 19:04:01.398296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.377 [2024-07-20 19:04:01.398337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420
00:33:51.377 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for every subsequent reconnect attempt against tqpair=0xe58840 ...]
00:33:51.379 [2024-07-20 19:04:01.441542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.379 [2024-07-20 19:04:01.441571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420
00:33:51.379 qpair failed and we were unable to recover it.
00:33:51.379 [2024-07-20 19:04:01.441817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.441843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.442088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.442113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.442352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.442377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.442615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.442640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.442857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.442883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.443101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.443126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.443360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.443386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.443621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.443647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.443913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.443939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.444203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.444229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 
00:33:51.379 [2024-07-20 19:04:01.444497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.444522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.444772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.444807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.445052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.445078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.445333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.445359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.445601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.445626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.445894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.445920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.446162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.446187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.446398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.446423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.446628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.446653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.446923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.446949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 
00:33:51.379 [2024-07-20 19:04:01.447191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.447216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.447455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.447480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.447725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.447750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.447988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.448014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.448254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.448279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.448520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.448545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.448785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.448844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.449133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.449158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.449421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.449446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.449692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.449717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 
00:33:51.379 [2024-07-20 19:04:01.449958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.449985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.450226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.450251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.450486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.450512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.450746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.450771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.451038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.451065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.451312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.451338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.451605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.451630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.451871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.451897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.452136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.452163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.452404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.452429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 
00:33:51.379 [2024-07-20 19:04:01.452668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.452697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.452966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.452993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.453267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.453292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.453531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.453556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.453770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.453802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.454048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.379 [2024-07-20 19:04:01.454073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.379 qpair failed and we were unable to recover it. 00:33:51.379 [2024-07-20 19:04:01.454289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.454314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.454556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.454581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.454825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.454851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.455115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.455140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 
00:33:51.380 [2024-07-20 19:04:01.455378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.455403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.455647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.455675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.455925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.455951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.456228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.456254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.456517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.456543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.456753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.456778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.457019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.457044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.457305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.457331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.457565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.457590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.457827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.457853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 
00:33:51.380 [2024-07-20 19:04:01.458093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.458118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.458327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.458352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.458618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.458644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.458881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.458907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.459170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.459196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.459411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.459438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.459681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.459707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.459988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.460018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.460264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.460289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.460506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.460531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 
00:33:51.380 [2024-07-20 19:04:01.460741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.460767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.460987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.461014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.461222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.461247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.461454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.461479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.461688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.461713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.461931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.461957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.462196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.462221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.462488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.462513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.462738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.462763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.462981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.463007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 
00:33:51.380 [2024-07-20 19:04:01.463243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.463268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.463503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.463529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.463748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.463773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.464044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.464070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.464291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.464316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.464561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.464586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.464826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.464852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.465096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.465121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.465332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.465357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.465595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.465620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 
00:33:51.380 [2024-07-20 19:04:01.465867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.465893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.466133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.466158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.466390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.466416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.466678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.466703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.466943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.466969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.467214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.467239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.467457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.467484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.467703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.467728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.467972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.467997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.468213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.468240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 
00:33:51.380 [2024-07-20 19:04:01.468476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.468501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.468767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.468801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.469067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.469092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.469303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.469328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.469536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.469561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.469782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.469818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.470063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.470088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.470369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.470394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.470628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.470657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.470923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.470949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 
00:33:51.380 [2024-07-20 19:04:01.471192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.471217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.471457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.471482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.471692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.471717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.471958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.471983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.472227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.472253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.472492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.472517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.472750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.472775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.473024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.473049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.473289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.473314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.473550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.473575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 
00:33:51.380 [2024-07-20 19:04:01.473784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.473822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.474066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.474091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.474338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.474363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.474598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.474623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.474838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.474864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.380 [2024-07-20 19:04:01.475099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.380 [2024-07-20 19:04:01.475124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.380 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.475365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.475390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.475607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.475633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.475844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.475871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.476166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.476191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 
00:33:51.381 [2024-07-20 19:04:01.476430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.476457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.476722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.476748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.476992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.477019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.477289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.477314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.477553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.477579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.477817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.477843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.478057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.478084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.478326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.478351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.478590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.478615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.478826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.478853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 
00:33:51.381 [2024-07-20 19:04:01.479117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.479141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.479357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.479383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.479645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.479670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.479914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.479940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.480153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.480178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.480423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.480448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.480666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.480691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.480898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.480923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.481130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.481155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.481424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.481449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 
00:33:51.381 [2024-07-20 19:04:01.481686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.481711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.481989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.482014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.482255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.482280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.482518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.482543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.482753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.482779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.483031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.483056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.483305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.483330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.483569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.483594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.483835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.483861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.484102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.484127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 
00:33:51.381 [2024-07-20 19:04:01.484333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.484359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.484594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.484619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.484832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.484857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.485104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.485130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.485338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.485363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.485602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.485627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.485873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.485900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.486155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.486180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.486422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.486447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.486658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.486683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 
00:33:51.381 [2024-07-20 19:04:01.486946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.486971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.487190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.487217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.487485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.487510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.487748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.487773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.488026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.488051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.488298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.488323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.488540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.488569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.488809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.488835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.489053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.489078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.489294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.489319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 
00:33:51.381 [2024-07-20 19:04:01.489553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.489579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.489822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.489848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.490087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.490112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.490377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.490403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.490667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.490692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.490910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.490936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.491206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.491231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.491471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.491496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.491709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.491734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.491946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.491971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 
00:33:51.381 [2024-07-20 19:04:01.492245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.492270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.492517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.492543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.492757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.492782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.493003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.493029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.493271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.493296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.381 qpair failed and we were unable to recover it. 00:33:51.381 [2024-07-20 19:04:01.493507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.381 [2024-07-20 19:04:01.493532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.493767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.493806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.494077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.494102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.494312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.494337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.494577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.494602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 
00:33:51.382 [2024-07-20 19:04:01.494839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.494866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.495103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.495129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.495343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.495369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.495608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.495635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.495904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.495930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.496192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.496217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.496481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.496506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.496750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.496776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.497018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.497044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.497281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.497307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 
00:33:51.382 [2024-07-20 19:04:01.497545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.497571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.497787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.497822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.498062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.498087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.498350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.498375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.498595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.498620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.498860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.498886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.499101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.499126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.499392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.499422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.499651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.499676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.499915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.499941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 
00:33:51.382 [2024-07-20 19:04:01.500175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.500201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.500439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.500464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.500675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.500700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.500961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.500987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.501240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.501265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.501502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.501527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.501780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.501815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.502067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.502092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.502327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.502352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.502566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.502591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 
00:33:51.382 [2024-07-20 19:04:01.502852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.502878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.503100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.503126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.503338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.503363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.503634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.503659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.503921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.503957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.504167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.504193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.504407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.504432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.504648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.504675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.504919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.504944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.505210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.505236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 
00:33:51.382 [2024-07-20 19:04:01.505446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.505471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.505740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.505765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.506034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.506060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.506276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.506301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.506512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.506542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.506782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.506815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.507034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.507060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.507332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.507357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.507596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.507622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.507857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.507884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 
00:33:51.382 [2024-07-20 19:04:01.508095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.508120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.508329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.508355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.508590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.508615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.508869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.508895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.509112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.509137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.509396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.509421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.509684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.509709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.509949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.509974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.510218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.510243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.510501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.510527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 
00:33:51.382 [2024-07-20 19:04:01.510762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.510787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.511062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.511087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.511350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.511376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.511618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.511643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.511886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.511912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.512127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.512152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.512399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.512424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.512688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.512713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.512926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.512951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.513161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.513186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 
00:33:51.382 [2024-07-20 19:04:01.513428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.513453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.513728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.513753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.514009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.514035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.514272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.514297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.514516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.514543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.382 qpair failed and we were unable to recover it. 00:33:51.382 [2024-07-20 19:04:01.514784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.382 [2024-07-20 19:04:01.514816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.515053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.515078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.515311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.515336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.515575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.515602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.515814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.515840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 
00:33:51.383 [2024-07-20 19:04:01.516080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.516105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.516343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.516368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.516628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.516654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.516866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.516892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.517127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.517153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.517407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.517438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.517645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.517671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.517908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.517934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.518153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.518178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.518419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.518446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 
00:33:51.383 [2024-07-20 19:04:01.518693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.518718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.518937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.518963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.519204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.519229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.519496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.519521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.519770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.519801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.520011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.520037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.520298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.520323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.520565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.520590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.520804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.520830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.521050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.521075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 
00:33:51.383 [2024-07-20 19:04:01.521293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.521318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.521556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.521581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.521848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.521874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.522110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.522135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.522373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.522398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.522619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.522644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.522888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.522914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.523164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.523189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.523422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.523447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.523660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.523685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 
00:33:51.383 [2024-07-20 19:04:01.523928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.523954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.524193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.524218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.524455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.524486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.524755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.524780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.525003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.525028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.525248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.525274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.525509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.525534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.525778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.525810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.526032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.526057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.526297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.526323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 
00:33:51.383 [2024-07-20 19:04:01.526562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.526588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.526837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.526863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.527083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.527108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.527347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.527372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.527589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.527614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.527835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.527861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.528079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.528104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.528311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.528338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.528598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.528623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.528844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.528870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 
00:33:51.383 [2024-07-20 19:04:01.529128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.529153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.529390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.529415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.529651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.529676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.529939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.529965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.530202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.530226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.530466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.530491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.530709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.530734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.530953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.530979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.531227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.531252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.531537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.531562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 
00:33:51.383 [2024-07-20 19:04:01.531778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.531809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.532025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.532050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.532269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.532294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.532557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.532582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.532832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.532857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.533121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.533147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.533394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.533419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.533665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.533690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.533929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.533955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 00:33:51.383 [2024-07-20 19:04:01.534166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.383 [2024-07-20 19:04:01.534191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.383 qpair failed and we were unable to recover it. 
00:33:51.384 [2024-07-20 19:04:01.534393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.534418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.534660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.534686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.534894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.534921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.535162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.535191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.535454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.535480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.535706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.535731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.535940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.535968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.536205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.536230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.536464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.536489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.536728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.536755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 
00:33:51.384 [2024-07-20 19:04:01.537029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.537055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.537266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.537292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.537525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.537550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.537788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.537819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.538030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.538057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.538289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.538316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.538533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.538558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.538815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.538842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.539086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.539112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.539335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.539359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 
00:33:51.384 [2024-07-20 19:04:01.539608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.539633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.539857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.539883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.540124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.540149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.540385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.540410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.540613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.540638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.540874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.540900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.541136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.541161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.541375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.541400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.541663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.541688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.541925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.541951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 
00:33:51.384 [2024-07-20 19:04:01.542193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.542223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.542468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.542493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.542733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.542758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.543007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.543033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.543239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.543264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.543481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.543506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.543716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.543741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.544014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.544040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.544309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.544335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.544570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.544595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 
00:33:51.384 [2024-07-20 19:04:01.544838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.544864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.545121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.545146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.545374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.545400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.545623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.545648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.545885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.545912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.546181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.546206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.546442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.546467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.546731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.546756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.547012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.547038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.547252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.547277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 
00:33:51.384 [2024-07-20 19:04:01.547519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.547544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.547780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.547812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.548058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.548083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.548320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.548345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.548558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.548583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.548818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.548844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.549117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.549142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.549413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.549438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.549661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.549686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.549953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.549979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 
00:33:51.384 [2024-07-20 19:04:01.550229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.550254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.550468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.550493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.550756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.550781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.551016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.551041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.551247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.551273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.551542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.551567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.551802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.551827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.552106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.552131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.552346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.552371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.552628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.552653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 
00:33:51.384 [2024-07-20 19:04:01.552895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.552925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.553191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.553221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.553438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.553464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.553680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.553707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.553950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.553976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.554192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.554217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.554428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.554453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.554662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.554687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.554904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.384 [2024-07-20 19:04:01.554930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.384 qpair failed and we were unable to recover it. 00:33:51.384 [2024-07-20 19:04:01.555143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.555168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 
00:33:51.385 [2024-07-20 19:04:01.555376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.555401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.555638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.555663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.555872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.555899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.556137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.556162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.556376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.556402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.556649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.556674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.556938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.556965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.557203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.557228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.557466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.557492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.557699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.557725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 
00:33:51.385 [2024-07-20 19:04:01.557943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.557968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.558189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.558214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.558458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.558484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.558723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.558749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.558989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.559015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.559247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.559273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.559489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.559515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.559759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.559784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.560025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.560050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.560296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.560322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 
00:33:51.385 [2024-07-20 19:04:01.560524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.560549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.560791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.560823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.561037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.561062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.561298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.561323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.561577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.561602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.561844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.561871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.562105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.562130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.562339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.562364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.562576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.562601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.562819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.562844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 
00:33:51.385 [2024-07-20 19:04:01.563065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.563090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.563302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.563327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.563540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.563565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.563777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.563811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.564024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.564049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.564292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.564318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.564547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.564572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.564788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.564819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.565051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.565076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 00:33:51.385 [2024-07-20 19:04:01.565310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.565335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it. 
00:33:51.385 [2024-07-20 19:04:01.565554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.385 [2024-07-20 19:04:01.565578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.385 qpair failed and we were unable to recover it.
00:33:51.385 [... the same three-message sequence (posix.c:1037:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from [2024-07-20 19:04:01.565] through [2024-07-20 19:04:01.619], wall clock 00:33:51.385-00:33:51.388 ...]
00:33:51.388 [2024-07-20 19:04:01.619874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.619900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.620141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.620166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.620406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.620431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.620646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.620671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.620914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.620940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.621180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.621205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.621445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.621470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.621678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.621704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.621973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.621998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.622237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.622263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 
00:33:51.388 [2024-07-20 19:04:01.622496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.622521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.622738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.622776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.623065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.623091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.623327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.623356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.623572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.623612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.623885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.623911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.624149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.624174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.624398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.624423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.624658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.624684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.624926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.624951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 
00:33:51.388 [2024-07-20 19:04:01.625190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.625216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.625454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.625479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.625718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.625743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.625983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.626009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.626294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.626319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.626673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.626712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.626974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.627001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.627241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.627267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.627508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.627533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.627802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.627828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 
00:33:51.388 [2024-07-20 19:04:01.628050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.628074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.628307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.628332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.628546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.628573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.628841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.628867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.629077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.629116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.629356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.629382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.629636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.629661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.629878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.629905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.630141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.388 [2024-07-20 19:04:01.630167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.388 qpair failed and we were unable to recover it. 00:33:51.388 [2024-07-20 19:04:01.630409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.630435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 
00:33:51.389 [2024-07-20 19:04:01.630729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.630769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.631012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.631038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.631280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.631305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.631533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.631558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.631810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.631837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.632054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.632079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.632281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.632308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.632546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.632571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.632836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.632862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.633108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.633133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 
00:33:51.389 [2024-07-20 19:04:01.633374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.633399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.633608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.633634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.633849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.633875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.634117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.634142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.634429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.634458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.634762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.634787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.635031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.635057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.635285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.635310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.635530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.635555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.635800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.635827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 
00:33:51.389 [2024-07-20 19:04:01.636066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.636091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.636355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.636380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.636616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.636641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.636851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.636877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.637081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.637106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.637337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.637362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.637597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.637623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.637861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.637887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.638100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.638126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.638370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.638395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 
00:33:51.389 [2024-07-20 19:04:01.638652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.638677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.638913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.638939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.639162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.639187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.639421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.639446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.639652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.639677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.639913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.639939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.640170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.640195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.640430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.640455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.640720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.640745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.640957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.640983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 
00:33:51.389 [2024-07-20 19:04:01.641193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.641219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.641455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.641481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.641751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.641776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.641994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.642019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.642234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.642261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.642460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.642486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.642706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.642732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.642973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.642999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.643241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.643266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.643504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.643529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 
00:33:51.389 [2024-07-20 19:04:01.643744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.643769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.644016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.644042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.644283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.644308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.644546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.644572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.644780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.644813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.645066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.645092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.645327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.645352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.645585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.645610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.645865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.645891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.646094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.646119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 
00:33:51.389 [2024-07-20 19:04:01.646334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.646359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.646570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.646595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.646858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.646884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.647152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.647176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.647427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.647452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.647693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.647718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.647978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.648004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.648244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.648269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.648504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.648530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.648771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.648804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 
00:33:51.389 [2024-07-20 19:04:01.649018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.649043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.649258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.649283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.649522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.649547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.649788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.649820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.650094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.650118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.650385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.650410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.650651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.389 [2024-07-20 19:04:01.650676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.389 qpair failed and we were unable to recover it. 00:33:51.389 [2024-07-20 19:04:01.650942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.650968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.651201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.651226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.651496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.651521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 
00:33:51.390 [2024-07-20 19:04:01.651760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.651785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.652027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.652053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.652281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.652312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.652527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.652552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.652855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.652881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.653122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.653148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.653395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.653420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.653685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.653709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.653953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.653979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.654199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.654224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 
00:33:51.390 [2024-07-20 19:04:01.654467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.654492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.654727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.654752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.655011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.655037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.655277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.655302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.655518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.655545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.655785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.655818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.656063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.656088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.656303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.656329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.656540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.656566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.656805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.656832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 
00:33:51.390 [2024-07-20 19:04:01.657077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.657104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.657321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.657346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.657579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.657604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.657869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.657895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.658106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.658133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.658399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.658425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.658662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.658687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.658906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.658933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.659141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.659166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.659438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.659463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 
00:33:51.390 [2024-07-20 19:04:01.659730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.659755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.659993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.660020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.660289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.660314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.660517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.660542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.660772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.660804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.661067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.661092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.661334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.661360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.661627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.661652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.661888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.661914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.662147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.662173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 
00:33:51.390 [2024-07-20 19:04:01.662433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.662457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.662659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.662684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.662940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.662966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.663213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.663239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.663473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.663498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.663738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.663763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.663987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.664013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.664209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.664235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.664440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.664466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.664674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.664699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 
00:33:51.390 [2024-07-20 19:04:01.664915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.664941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.665157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.665182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.665417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.665442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.665657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.665682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.665945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.665970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.666237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.666262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.666473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.666499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.666736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.666761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.667032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.667057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.667278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.667304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 
00:33:51.390 [2024-07-20 19:04:01.667537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.667562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.667797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.667823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.668089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.668114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.668377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.668402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.668604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.668629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.668890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.390 [2024-07-20 19:04:01.668915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.390 qpair failed and we were unable to recover it. 00:33:51.390 [2024-07-20 19:04:01.669179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.669204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.669445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.669470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.669706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.669731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.669981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.670007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 
00:33:51.391 [2024-07-20 19:04:01.670251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.670282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.670551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.670577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.670861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.670887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.671153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.671178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.671395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.671420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.671654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.671680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.671890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.671916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.672128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.672153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.672361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.672386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.672589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.672614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 
00:33:51.391 [2024-07-20 19:04:01.672850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.672876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.673097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.673121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.673362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.673387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.673592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.673617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.673834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.673861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.674146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.674170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.674380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.674405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.674644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.674669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.674938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.674964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.675183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.675209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 
00:33:51.391 [2024-07-20 19:04:01.675484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.675509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.675747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.675772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.675996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.676021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.676254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.676279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.676540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.676565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.676809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.676835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.677047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.677072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.677310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.677335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.677578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.677604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.677853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.677880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 
00:33:51.391 [2024-07-20 19:04:01.678121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.678148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.678414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.678440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.678655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.678680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.678931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.678957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.679193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.679219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.679454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.679481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.679717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.679754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.680033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.680061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.680298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.680324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.680586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.680613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 
00:33:51.391 [2024-07-20 19:04:01.680887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.680913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.681156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.681186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.681435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.681467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.681722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.681751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.681979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.682006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.682245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.682270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.682511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.682536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.682805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.682831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.683109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.683136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 00:33:51.391 [2024-07-20 19:04:01.683354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.391 [2024-07-20 19:04:01.683383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.391 qpair failed and we were unable to recover it. 
00:33:51.677 [2024-07-20 19:04:01.683600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.677 [2024-07-20 19:04:01.683633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.677 qpair failed and we were unable to recover it. 00:33:51.677 [2024-07-20 19:04:01.683932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.677 [2024-07-20 19:04:01.683958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.677 qpair failed and we were unable to recover it. 00:33:51.677 [2024-07-20 19:04:01.684225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.677 [2024-07-20 19:04:01.684250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.677 qpair failed and we were unable to recover it. 00:33:51.677 [2024-07-20 19:04:01.684502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.684527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.684734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.684760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.685011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.685037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.685259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.685285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.685498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.685523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.685744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.685770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.685993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.686019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 
00:33:51.678 [2024-07-20 19:04:01.686233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.686258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.686499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.686524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.686762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.686787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.687064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.687090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.687319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.687344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.687566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.687593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.687816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.687842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.688048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.688073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.688311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.688341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.688578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.688603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 
00:33:51.678 [2024-07-20 19:04:01.688866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.688893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.689099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.689124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.689344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.689369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.689578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.689603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.689841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.689867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.690077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.690104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.690305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.690331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.690615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.690640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.690855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.690881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.691098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.691125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 
00:33:51.678 [2024-07-20 19:04:01.691363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.691388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.691591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.691617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.691843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.691870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.692134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.692159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.692374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.692401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.692640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.692665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.692923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.692948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.693161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.693187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.693454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.693480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.693742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.693768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 
00:33:51.678 [2024-07-20 19:04:01.694033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.694059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.694305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.694330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.694553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.694578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.694861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.694887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.695118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.695143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.695409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.695434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.695679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.695704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.695950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.695976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.696209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.696235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.696439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.696464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 
00:33:51.678 [2024-07-20 19:04:01.696722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.696747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.696953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.696978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.697248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.697273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.697508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.697532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.697743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.697768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.697990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.698016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.698253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.698279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.698542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.698567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.698847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.698873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 00:33:51.678 [2024-07-20 19:04:01.699214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.678 [2024-07-20 19:04:01.699260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.678 qpair failed and we were unable to recover it. 
00:33:51.679 [2024-07-20 19:04:01.699536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.699563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.699865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.699892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.700138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.700164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.700370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.700395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.700663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.700688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.700935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.700961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.701248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.701273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.701572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.701597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.701840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.701867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.702086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.702112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 
00:33:51.679 [2024-07-20 19:04:01.702345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.702370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.702607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.702633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.702870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.702896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.703144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.703169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.703398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.703423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.703658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.703683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.703914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.703940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.704184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.704210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.704448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.704473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.704691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.704716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 
00:33:51.679 [2024-07-20 19:04:01.704966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.704992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.705219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.705244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.705527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.705553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.705790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.705821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.706040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.706065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.706329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.706354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.706594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.706625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.706843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.706869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.707105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.707130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.707334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.707359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 
00:33:51.679 [2024-07-20 19:04:01.707575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.707600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.707842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.707868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.708114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.708140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.708345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.708372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.708619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.708645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.708867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.708892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.709108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.709133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.709407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.709432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.709642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.709667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 00:33:51.679 [2024-07-20 19:04:01.709878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.709904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.679 qpair failed and we were unable to recover it. 
00:33:51.679 [2024-07-20 19:04:01.710128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.679 [2024-07-20 19:04:01.710154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.710370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.710395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.710595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.710620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.710864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.710890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.711104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.711128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.711349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.711374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.711619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.711644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.711854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.711894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.712137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.712162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.712407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.712432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 
00:33:51.680 [2024-07-20 19:04:01.712646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.712671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.712936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.712962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.713195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.713220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.713481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.713506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.713752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.713777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.714052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.714077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.714317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.714343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.714597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.714621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.714833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.714858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.715078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.715105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 
00:33:51.680 [2024-07-20 19:04:01.715342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.715367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.715601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.715627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.715896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.715922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.716177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.716210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.716460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.716486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.716719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.716745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.716987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.717015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.717287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.717329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.717594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.717631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.717917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.717944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 
00:33:51.680 [2024-07-20 19:04:01.718205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.718231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.718449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.718474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.718724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.718749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.719047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.719073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.719316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.719341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.719584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.719610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.719851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.719878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.720129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.720154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.720381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.720406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.720647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.720673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 
00:33:51.680 [2024-07-20 19:04:01.720911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.720937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.680 [2024-07-20 19:04:01.721148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.680 [2024-07-20 19:04:01.721174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.680 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.721417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.721442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.721682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.721707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.721961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.721987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.722203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.722228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.722466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.722492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.722707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.722732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.722938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.722964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.723167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.723193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 
00:33:51.681 [2024-07-20 19:04:01.723394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.723419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.723657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.723682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.723924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.723951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.724160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.724185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.724394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.724423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.724669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.724696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.724944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.724970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.725250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.725275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.725514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.725540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.725805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.725831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 
00:33:51.681 [2024-07-20 19:04:01.726070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.726098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.726365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.726390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.726658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.726683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.726905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.726931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.727159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.727184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.727447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.727473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.727718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.727744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.727966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.727992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.728271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.728297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.728537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.728562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 
00:33:51.681 [2024-07-20 19:04:01.728772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.728802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.729059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.729084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.729313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.729338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.729582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.729607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.729867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.681 [2024-07-20 19:04:01.729893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.681 qpair failed and we were unable to recover it. 00:33:51.681 [2024-07-20 19:04:01.730103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.730128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.730392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.730417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.730684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.730709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.730929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.730954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.731198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.731224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 
00:33:51.682 [2024-07-20 19:04:01.731493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.731518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.731737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.731762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.732037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.732063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.732311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.732336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.732586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.732611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.732845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.732871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.733086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.733112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.733377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.733402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.733650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.733675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.733932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.733958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 
00:33:51.682 [2024-07-20 19:04:01.734177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.734203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.734419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.734444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.734707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.734733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.734986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.735012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.735271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.735297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.735540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.735569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.735814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.735840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.736047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.736072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.736340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.736365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.736589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.736614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 
00:33:51.682 [2024-07-20 19:04:01.736849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.736876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.737123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.737148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.737411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.737436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.737681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.737706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.737942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.737968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.738186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.738212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.738461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.738486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.738732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.738757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.739009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.739035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.739282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.739309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 
00:33:51.682 [2024-07-20 19:04:01.739561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.739587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.739856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.739883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.740096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.740121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.740405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.740430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.740669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.740695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.740965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.740991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.682 qpair failed and we were unable to recover it. 00:33:51.682 [2024-07-20 19:04:01.741240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.682 [2024-07-20 19:04:01.741265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.683 qpair failed and we were unable to recover it. 00:33:51.683 [2024-07-20 19:04:01.741518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.683 [2024-07-20 19:04:01.741544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.683 qpair failed and we were unable to recover it. 00:33:51.683 [2024-07-20 19:04:01.741782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.683 [2024-07-20 19:04:01.741813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.683 qpair failed and we were unable to recover it. 00:33:51.683 [2024-07-20 19:04:01.742086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.683 [2024-07-20 19:04:01.742112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.683 qpair failed and we were unable to recover it. 
00:33:51.683 [2024-07-20 19:04:01.742379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.683 [2024-07-20 19:04:01.742405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.683 qpair failed and we were unable to recover it. 00:33:51.683 [2024-07-20 19:04:01.742644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.683 [2024-07-20 19:04:01.742670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.683 qpair failed and we were unable to recover it. 00:33:51.683 [2024-07-20 19:04:01.742870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.683 [2024-07-20 19:04:01.742895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.683 qpair failed and we were unable to recover it. 00:33:51.683 [2024-07-20 19:04:01.743111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.683 [2024-07-20 19:04:01.743137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.683 qpair failed and we were unable to recover it. 00:33:51.683 [2024-07-20 19:04:01.743402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.683 [2024-07-20 19:04:01.743428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.683 qpair failed and we were unable to recover it. 00:33:51.683 [2024-07-20 19:04:01.743664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.683 [2024-07-20 19:04:01.743691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.683 qpair failed and we were unable to recover it. 00:33:51.683 [2024-07-20 19:04:01.743937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.683 [2024-07-20 19:04:01.743964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.683 qpair failed and we were unable to recover it. 00:33:51.683 [2024-07-20 19:04:01.744239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.683 [2024-07-20 19:04:01.744264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.683 qpair failed and we were unable to recover it. 00:33:51.683 [2024-07-20 19:04:01.744482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.683 [2024-07-20 19:04:01.744508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.683 qpair failed and we were unable to recover it. 00:33:51.683 [2024-07-20 19:04:01.744755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.683 [2024-07-20 19:04:01.744782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.683 qpair failed and we were unable to recover it. 
00:33:51.683 [2024-07-20 19:04:01.745038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.683 [2024-07-20 19:04:01.745064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420
00:33:51.683 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats without variation from 19:04:01.745 through 19:04:01.800 ...]
00:33:51.688 [2024-07-20 19:04:01.800453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.688 [2024-07-20 19:04:01.800478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420
00:33:51.688 qpair failed and we were unable to recover it.
00:33:51.688 [2024-07-20 19:04:01.800746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.688 [2024-07-20 19:04:01.800775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.688 qpair failed and we were unable to recover it. 00:33:51.688 [2024-07-20 19:04:01.801048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.688 [2024-07-20 19:04:01.801073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.688 qpair failed and we were unable to recover it. 00:33:51.688 [2024-07-20 19:04:01.801350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.688 [2024-07-20 19:04:01.801375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.688 qpair failed and we were unable to recover it. 00:33:51.688 [2024-07-20 19:04:01.801648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.688 [2024-07-20 19:04:01.801673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.688 qpair failed and we were unable to recover it. 00:33:51.688 [2024-07-20 19:04:01.801948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.688 [2024-07-20 19:04:01.801973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.688 qpair failed and we were unable to recover it. 00:33:51.688 [2024-07-20 19:04:01.802248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.688 [2024-07-20 19:04:01.802274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.688 qpair failed and we were unable to recover it. 00:33:51.688 [2024-07-20 19:04:01.802528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.688 [2024-07-20 19:04:01.802553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.688 qpair failed and we were unable to recover it. 00:33:51.688 [2024-07-20 19:04:01.802818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.688 [2024-07-20 19:04:01.802845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.688 qpair failed and we were unable to recover it. 00:33:51.688 [2024-07-20 19:04:01.803098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.688 [2024-07-20 19:04:01.803124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.688 qpair failed and we were unable to recover it. 00:33:51.688 [2024-07-20 19:04:01.803364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.688 [2024-07-20 19:04:01.803389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.688 qpair failed and we were unable to recover it. 
00:33:51.688 [2024-07-20 19:04:01.803624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.688 [2024-07-20 19:04:01.803651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.688 qpair failed and we were unable to recover it. 00:33:51.688 [2024-07-20 19:04:01.803863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.688 [2024-07-20 19:04:01.803890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.688 qpair failed and we were unable to recover it. 00:33:51.688 [2024-07-20 19:04:01.804128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.688 [2024-07-20 19:04:01.804153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.688 qpair failed and we were unable to recover it. 00:33:51.688 [2024-07-20 19:04:01.804398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.688 [2024-07-20 19:04:01.804423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.688 qpair failed and we were unable to recover it. 00:33:51.688 [2024-07-20 19:04:01.804660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.688 [2024-07-20 19:04:01.804686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.688 qpair failed and we were unable to recover it. 00:33:51.688 [2024-07-20 19:04:01.804904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.688 [2024-07-20 19:04:01.804930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.688 qpair failed and we were unable to recover it. 00:33:51.688 [2024-07-20 19:04:01.805148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.688 [2024-07-20 19:04:01.805173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.688 qpair failed and we were unable to recover it. 00:33:51.688 [2024-07-20 19:04:01.805410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.688 [2024-07-20 19:04:01.805435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.688 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.805674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.805699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.805904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.805930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 
00:33:51.689 [2024-07-20 19:04:01.806136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.806161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.806402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.806427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.806665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.806690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.806925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.806951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.807162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.807187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.807430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.807454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.807670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.807696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.807941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.807970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.808183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.808208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.808412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.808437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 
00:33:51.689 [2024-07-20 19:04:01.808675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.808700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.808940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.808966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.809199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.809224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.809460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.809485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.809723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.809748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.809992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.810018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.810265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.810290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.810523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.810548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.810763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.810788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.811008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.811033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 
00:33:51.689 [2024-07-20 19:04:01.811283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.811310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.811558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.811584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.811806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.811834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.812101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.812126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.812366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.812392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.812643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.812669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.812889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.812916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.813125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.813151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.813414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.813440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.813660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.813685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 
00:33:51.689 [2024-07-20 19:04:01.813927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.813953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.814196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.814221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.814432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.814457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.814703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.814728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.814953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.814979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.815249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.815278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.815544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.815569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.815815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.815863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.816071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.689 [2024-07-20 19:04:01.816097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.689 qpair failed and we were unable to recover it. 00:33:51.689 [2024-07-20 19:04:01.816336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.816362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 
00:33:51.690 [2024-07-20 19:04:01.816581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.816606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.816844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.816870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.817133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.817158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.817383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.817408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.817621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.817646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.817854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.817880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.818088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.818115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.818355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.818380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.818634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.818663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.818872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.818898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 
00:33:51.690 [2024-07-20 19:04:01.819119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.819146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.819384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.819409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.819672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.819698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.819944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.819970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.820183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.820208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.820475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.820500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.820703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.820729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.820967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.820994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.821259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.821284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.821519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.821546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 
00:33:51.690 [2024-07-20 19:04:01.821816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.821842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.822063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.822088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.822331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.822357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.822592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.822617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.822840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.822868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.823115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.823140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.823348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.823373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.823579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.823604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.823871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.823897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.824140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.824165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 
00:33:51.690 [2024-07-20 19:04:01.824388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.824413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.824663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.824688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.824905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.824931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.825168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.825193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.825415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.690 [2024-07-20 19:04:01.825441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.690 qpair failed and we were unable to recover it. 00:33:51.690 [2024-07-20 19:04:01.825695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.825720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.825965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.825992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.826202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.826227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.826438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.826463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.826715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.826740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 
00:33:51.691 [2024-07-20 19:04:01.826976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.827002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.827251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.827276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.827509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.827534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.827813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.827839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.828093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.828118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.828330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.828355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.828593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.828618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.828858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.828884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.829110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.829135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.829349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.829374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 
00:33:51.691 [2024-07-20 19:04:01.829593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.829618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.829845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.829871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.830113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.830138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.830408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.830433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.830664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.830689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.830908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.830935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.831143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.831168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.831383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.831408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.831615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.831640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.831909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.831935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 
00:33:51.691 [2024-07-20 19:04:01.832172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.832197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.832420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.832445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.832705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.832731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.832958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.832984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.833226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.833251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.833517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.833542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.833757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.833782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.834056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.834082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.834324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.834349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.834559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.834584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 
00:33:51.691 [2024-07-20 19:04:01.834834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.834859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.835184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.835211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.835482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.835508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.835761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.835786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.836043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.836069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.836313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.691 [2024-07-20 19:04:01.836339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.691 qpair failed and we were unable to recover it. 00:33:51.691 [2024-07-20 19:04:01.836551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.836580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.836784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.836817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.837035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.837062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.837323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.837349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 
00:33:51.692 [2024-07-20 19:04:01.837595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.837620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.837860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.837886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.838130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.838155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.838391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.838416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.838652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.838677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.838910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.838935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.839196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.839222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.839436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.839461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.839674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.839701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.839916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.839942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 
00:33:51.692 [2024-07-20 19:04:01.840208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.840233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.840507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.840533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.840747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.840773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.841014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.841040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.841353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.841378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.841644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.841670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.841917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.841944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.842163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.842188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.842424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.842450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.842714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.842739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 
00:33:51.692 [2024-07-20 19:04:01.842953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.842980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.843223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.843249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.843518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.843543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.843829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.843855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.844104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.844129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.844367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.844392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.844648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.844674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.844950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.844976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.845239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.845263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.845488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.845513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 
00:33:51.692 [2024-07-20 19:04:01.845762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.845788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.846011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.846036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.846271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.846297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.846512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.846538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.846755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.846780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.847005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.847030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.847258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.847283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.692 [2024-07-20 19:04:01.847505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.692 [2024-07-20 19:04:01.847532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.692 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.847762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.847788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.848069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.848095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 
00:33:51.693 [2024-07-20 19:04:01.848340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.848365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.848606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.848631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.848840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.848867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.849129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.849154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.849401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.849426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.849642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.849668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.849899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.849925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.850181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.850206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.850445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.850470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.850724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.850750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 
00:33:51.693 [2024-07-20 19:04:01.850979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.851005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.851245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.851270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.851489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.851514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.851753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.851778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.852028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.852053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.852293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.852317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.852561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.852586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.852805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.852831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.853067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.853092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.853369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.853394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 
00:33:51.693 [2024-07-20 19:04:01.853611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.853636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.853853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.853879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.854142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.854167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.854406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.854431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.854677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.854705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.854945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.854971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.855205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.855230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.855492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.855517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.855776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.855805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.856047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.856072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 
00:33:51.693 [2024-07-20 19:04:01.856286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.856311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.856533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.856558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.856821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.856848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.857085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.857112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.857375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.857401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.857651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.857677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.857941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.857967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.858211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.858236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.858481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.693 [2024-07-20 19:04:01.858506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.693 qpair failed and we were unable to recover it. 00:33:51.693 [2024-07-20 19:04:01.858719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.858745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 
00:33:51.694 [2024-07-20 19:04:01.858978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.859004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.859218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.859243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.859506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.859532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.859774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.859804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.860049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.860074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.860319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.860346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.860653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.860678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.860950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.860976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.861222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.861248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.861485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.861512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 
00:33:51.694 [2024-07-20 19:04:01.861778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.861809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.862079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.862104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.862327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.862353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.862595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.862621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.862838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.862865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.863110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.863135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.863353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.863379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.863595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.863620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.863867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.863893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.864158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.864184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 
00:33:51.694 [2024-07-20 19:04:01.864406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.864432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.864645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.864670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.864910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.864936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.865181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.865207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.865414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.865441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.865681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.865711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.865961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.865988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.866232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.866257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.866498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.866523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.866771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.866802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 
00:33:51.694 [2024-07-20 19:04:01.867039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.867064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.867300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.867325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.867560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.867585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.867856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.867881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.868128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.694 [2024-07-20 19:04:01.868154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.694 qpair failed and we were unable to recover it. 00:33:51.694 [2024-07-20 19:04:01.868376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.868401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.868612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.868638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.868855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.868883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.869089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.869116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.869337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.869362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 
00:33:51.695 [2024-07-20 19:04:01.869565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.869591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.869798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.869824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.870089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.870114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.870354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.870379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.870614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.870640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.870896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.870922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.871190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.871215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.871456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.871481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.871722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.871747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.871999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.872025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 
00:33:51.695 [2024-07-20 19:04:01.872231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.872257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.872470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.872495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.872714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.872744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.873012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.873038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.873280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.873306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.873567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.873592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.873838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.873865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.874082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.874109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.874352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.874378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.874626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.874652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 
00:33:51.695 [2024-07-20 19:04:01.874870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.874896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.875113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.875138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.875347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.875373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.875582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.875608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.875842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.875868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.876095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.876121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.876390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.876416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.876642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.876667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.876882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.876908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.877124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.877149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 
00:33:51.695 [2024-07-20 19:04:01.877387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.877412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.877654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.877679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.877918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.877946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.878194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.878219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.878424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.878450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.878662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.878688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.695 qpair failed and we were unable to recover it. 00:33:51.695 [2024-07-20 19:04:01.878930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.695 [2024-07-20 19:04:01.878956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.696 qpair failed and we were unable to recover it. 00:33:51.696 [2024-07-20 19:04:01.879194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.696 [2024-07-20 19:04:01.879220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.696 qpair failed and we were unable to recover it. 00:33:51.696 [2024-07-20 19:04:01.879466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.696 [2024-07-20 19:04:01.879492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.696 qpair failed and we were unable to recover it. 00:33:51.696 [2024-07-20 19:04:01.879698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.696 [2024-07-20 19:04:01.879723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.696 qpair failed and we were unable to recover it. 
00:33:51.696 [2024-07-20 19:04:01.879971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.696 [2024-07-20 19:04:01.879998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420
00:33:51.696 qpair failed and we were unable to recover it.
[... identical entries repeat continuously from 19:04:01.880236 through 19:04:01.935345: each attempt logs the same posix_sock_create connect() failure (errno = 111) followed by the same nvme_tcp_qpair_connect_sock error for tqpair=0xe58840, addr=10.0.0.2, port=4420, and ends with "qpair failed and we were unable to recover it." ...]
00:33:51.701 [2024-07-20 19:04:01.935582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.701 [2024-07-20 19:04:01.935609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420
00:33:51.701 qpair failed and we were unable to recover it.
00:33:51.701 [2024-07-20 19:04:01.935855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.935882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.936130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.936155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.936377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.936403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.936670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.936696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.936939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.936965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.937179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.937204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.937419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.937444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.937680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.937711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.937952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.937978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.938250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.938276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 
00:33:51.701 [2024-07-20 19:04:01.938486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.938513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.938779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.938810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.939058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.939084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.939314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.939339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.939555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.939581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.939823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.939858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.940099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.940124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.940335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.940361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.940565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.940591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.940863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.940889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 
00:33:51.701 [2024-07-20 19:04:01.941137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.941163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.941439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.941464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.941693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.941719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.942002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.942028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.942294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.942319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.942531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.942556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.942802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.942828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.943037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.943062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.701 qpair failed and we were unable to recover it. 00:33:51.701 [2024-07-20 19:04:01.943297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.701 [2024-07-20 19:04:01.943322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.943558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.943583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 
00:33:51.702 [2024-07-20 19:04:01.943826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.943853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.944075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.944101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.944318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.944343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.944576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.944602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.944892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.944918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.945168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.945194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.945404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.945429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.945670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.945695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.945928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.945954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.946214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.946239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 
00:33:51.702 [2024-07-20 19:04:01.946481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.946506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.946757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.946782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.947037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.947062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.947301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.947326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.947568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.947593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.947812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.947838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.948057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.948082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.948295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.948320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.948534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.948564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.948777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.948816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 
00:33:51.702 [2024-07-20 19:04:01.949057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.949083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.949322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.949347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.949561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.949586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.949822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.949852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.950094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.950119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.950351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.950376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.950641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.950666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.950908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.950934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.951206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.951231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.951477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.951502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 
00:33:51.702 [2024-07-20 19:04:01.951756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.951781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.952034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.952060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.952277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.952303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.952555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.952581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.952827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.952855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.953068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.953093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.953356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.953381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.953621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.953647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.953915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.953941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.702 qpair failed and we were unable to recover it. 00:33:51.702 [2024-07-20 19:04:01.954192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.702 [2024-07-20 19:04:01.954217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 
00:33:51.703 [2024-07-20 19:04:01.954459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.954483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.954747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.954771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.955022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.955048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.955283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.955308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.955553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.955578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.955862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.955892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.956130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.956154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.956388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.956413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.956685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.956710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.956955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.956982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 
00:33:51.703 [2024-07-20 19:04:01.957198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.957223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.957485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.957510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.957782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.957815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.958085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.958110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.958341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.958365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.958602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.958627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.958852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.958878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.959124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.959149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.959365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.959391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.959659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.959684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 
00:33:51.703 [2024-07-20 19:04:01.959908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.959935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.960157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.960182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.960403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.960428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.960666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.960691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.960924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.960950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.961166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.961191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.961402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.961427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.961687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.961711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.961951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.961978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.962272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.962298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 
00:33:51.703 [2024-07-20 19:04:01.962520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.962544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.962817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.962847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.963063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.963088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.963298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.963322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.963582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.703 [2024-07-20 19:04:01.963607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.703 qpair failed and we were unable to recover it. 00:33:51.703 [2024-07-20 19:04:01.963824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.963850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.964074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.964099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.964342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.964366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.964603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.964628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.964849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.964875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 
00:33:51.704 [2024-07-20 19:04:01.965111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.965137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.965370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.965394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.965639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.965664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.965884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.965910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.966120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.966145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.966393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.966418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.966665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.966694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.966914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.966939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.967161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.967186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.967387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.967412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 
00:33:51.704 [2024-07-20 19:04:01.967675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.967700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.967937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.967963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.968232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.968257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.968504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.968529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.968746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.968771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.969018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.969044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.969307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.969332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.969551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.969576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.969814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.969839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.970080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.970105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 
00:33:51.704 [2024-07-20 19:04:01.970350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.970375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.970610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.970635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.970853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.970879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.971114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.971139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.971347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.971372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.971642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.971667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.971929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.971955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.972193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.972218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.972461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.972487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.972757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.972782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 
00:33:51.704 [2024-07-20 19:04:01.973038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.973063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.973329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.973354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.973626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.973651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.973926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.973955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.974227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.974251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.974517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.704 [2024-07-20 19:04:01.974542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.704 qpair failed and we were unable to recover it. 00:33:51.704 [2024-07-20 19:04:01.974806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.974832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.975075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.975100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.975337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.975364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.975575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.975600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 
00:33:51.705 [2024-07-20 19:04:01.975865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.975891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.976109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.976134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.976350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.976376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.976612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.976637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.976874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.976900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.977139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.977164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.977398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.977424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.977666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.977692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.977959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.977985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.978258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.978283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 
00:33:51.705 [2024-07-20 19:04:01.978503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.978528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.978767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.978798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.979036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.979061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.979296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.979322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.979564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.979589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.979830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.979857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.980078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.980104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.980347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.980373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.980577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.980602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.980823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.980850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 
00:33:51.705 [2024-07-20 19:04:01.981057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.981083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.981320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.981345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.981584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.981609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.981826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.981852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.982064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.982091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.982355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.982391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.982679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.982707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.982975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.983001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.983234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.983259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.983501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.983526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 
00:33:51.705 [2024-07-20 19:04:01.983788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.983820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.984061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.984089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.984328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.984356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.984662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.984687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.984928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.984959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.985199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.985224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.985474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.705 [2024-07-20 19:04:01.985499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.705 qpair failed and we were unable to recover it. 00:33:51.705 [2024-07-20 19:04:01.985711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.706 [2024-07-20 19:04:01.985738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.706 qpair failed and we were unable to recover it. 00:33:51.706 [2024-07-20 19:04:01.985982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.706 [2024-07-20 19:04:01.986011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.706 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.986275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.986301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 
00:33:51.976 [2024-07-20 19:04:01.986731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.986757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.986977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.987003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.987243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.987268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.987500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.987525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.987761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.987786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.988015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.988041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.988242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.988268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.988509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.988534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.988778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.988809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.989031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.989056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 
00:33:51.976 [2024-07-20 19:04:01.989294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.989319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.989538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.989563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.989803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.989829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.990050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.990075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.990310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.990335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.990575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.990600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.990839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.990865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.991079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.991104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.991316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.991341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.991580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.991604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 
00:33:51.976 [2024-07-20 19:04:01.991877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.991902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.992122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.992152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.992394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.992418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.992668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.992693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.992933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.992958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.993179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.993204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.993426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.993451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.993692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.993716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.993957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.993983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.994226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.994252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 
00:33:51.976 [2024-07-20 19:04:01.994523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.994547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.994786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.994817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.995031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.976 [2024-07-20 19:04:01.995056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.976 qpair failed and we were unable to recover it. 00:33:51.976 [2024-07-20 19:04:01.995298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:01.995323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:01.995571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:01.995596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:01.995844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:01.995871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:01.996111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:01.996136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:01.996376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:01.996401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:01.996601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:01.996626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:01.996870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:01.996895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 
00:33:51.977 [2024-07-20 19:04:01.997099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:01.997123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:01.997338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:01.997362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:01.997629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:01.997654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:01.997870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:01.997897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:01.998120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:01.998164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:01.998460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:01.998485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:01.998730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:01.998758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:01.999024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:01.999049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:01.999320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:01.999348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:01.999612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:01.999637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 
00:33:51.977 [2024-07-20 19:04:01.999923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:01.999952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.000217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.000246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.000551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.000609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.000856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.000882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.001179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.001207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.001471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.001498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.001786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.001819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.002064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.002089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.002358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.002386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.002644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.002673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 
00:33:51.977 [2024-07-20 19:04:02.002939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.002966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.003180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.003205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.003529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.003597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.003864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.003892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.004172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.004198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.004433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.004458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.004762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.004789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.005074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.005102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.005594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.005645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.005888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.005917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 
00:33:51.977 [2024-07-20 19:04:02.006163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.006191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.006475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.006503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.006912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.006940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.007225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.007249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.007534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.007561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.007826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.007855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.008119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.008148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.008416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.008441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.008746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.008774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.009057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.009085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 
00:33:51.977 [2024-07-20 19:04:02.009567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.009616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.009912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.009938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.010253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.010282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.010532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.010557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.010846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.010875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.011179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.011204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.011470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.011495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.011779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.011816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.012056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.012081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.012336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.012361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 
00:33:51.977 [2024-07-20 19:04:02.012671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.012696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.012951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.012980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.013240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.013265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.013531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.013556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.013805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.013833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.014093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.014121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.014627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.014676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.014965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.014991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.015297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.015325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.015561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.015589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 
00:33:51.977 [2024-07-20 19:04:02.015856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.977 [2024-07-20 19:04:02.015885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.977 qpair failed and we were unable to recover it. 00:33:51.977 [2024-07-20 19:04:02.016155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.978 [2024-07-20 19:04:02.016180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.978 qpair failed and we were unable to recover it. 00:33:51.978 [2024-07-20 19:04:02.016459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.978 [2024-07-20 19:04:02.016486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.978 qpair failed and we were unable to recover it. 00:33:51.978 [2024-07-20 19:04:02.016761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.978 [2024-07-20 19:04:02.016789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.978 qpair failed and we were unable to recover it. 00:33:51.978 [2024-07-20 19:04:02.017077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.978 [2024-07-20 19:04:02.017105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.978 qpair failed and we were unable to recover it. 00:33:51.978 [2024-07-20 19:04:02.017390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.978 [2024-07-20 19:04:02.017414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.978 qpair failed and we were unable to recover it. 00:33:51.978 [2024-07-20 19:04:02.017692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.978 [2024-07-20 19:04:02.017720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.978 qpair failed and we were unable to recover it. 00:33:51.978 [2024-07-20 19:04:02.018006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.978 [2024-07-20 19:04:02.018047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.978 qpair failed and we were unable to recover it. 00:33:51.978 [2024-07-20 19:04:02.018418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.978 [2024-07-20 19:04:02.018478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.978 qpair failed and we were unable to recover it. 00:33:51.978 [2024-07-20 19:04:02.018735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.978 [2024-07-20 19:04:02.018760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.978 qpair failed and we were unable to recover it. 
00:33:51.978 [2024-07-20 19:04:02.019010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.978 [2024-07-20 19:04:02.019038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420
00:33:51.978 qpair failed and we were unable to recover it.
00:33:51.978 [2024-07-20 19:04:02.019338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.978 [2024-07-20 19:04:02.019377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420
00:33:51.978 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix.c:1037:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 19:04:02.019 through 19:04:02.087 ...]
00:33:51.981 [2024-07-20 19:04:02.087755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.981 [2024-07-20 19:04:02.087783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420
00:33:51.981 qpair failed and we were unable to recover it.
00:33:51.981 [2024-07-20 19:04:02.088066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.088092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.088360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.088386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.088656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.088684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.088955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.088981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.089265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.089317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.089619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.089644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.089939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.089964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.090242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.090270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.090696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.090747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.090984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.091009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 
00:33:51.981 [2024-07-20 19:04:02.091289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.091317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.091560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.091588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.091853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.091879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.092235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.092305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.092578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.092610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.092865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.092896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.093194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.093223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.093510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.093535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.093809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.093837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.094098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.094126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 
00:33:51.981 [2024-07-20 19:04:02.094485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.094510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.094782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.094828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.095131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.095159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.095428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.095456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.095902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.095931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.096299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.096342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.096626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.096657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.096929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.096959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.097218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.097247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.097512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.097539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 
00:33:51.981 [2024-07-20 19:04:02.097823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.097852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.098114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.098142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.098370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.098394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.098635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.981 [2024-07-20 19:04:02.098660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.981 qpair failed and we were unable to recover it. 00:33:51.981 [2024-07-20 19:04:02.098954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.098980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.099224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.099253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.099551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.099578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.099876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.099902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.100213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.100238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.100516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.100544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 
00:33:51.982 [2024-07-20 19:04:02.100842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.100883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.101169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.101210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.101523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.101551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.101822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.101863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.102097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.102127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.102381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.102407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.102752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.102780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.103080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.103108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.103426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.103454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.103720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.103745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 
00:33:51.982 [2024-07-20 19:04:02.103974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.104017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.104287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.104315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.104812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.104857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.105118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.105143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.105427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.105455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.105749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.105784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.106062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.106090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.106355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.106381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.106626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.106656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.106947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.106976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 
00:33:51.982 [2024-07-20 19:04:02.107354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.107402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.107666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.107691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.108027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.108056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.108331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.108358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.108671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.108695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.108945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.108971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.109271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.109299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.109570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.109598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.109889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.109918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.110207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.110246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 
00:33:51.982 [2024-07-20 19:04:02.110559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.110587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.110851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.110881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.111147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.111171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.111397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.111421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.111703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.111731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.112033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.112062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.112402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.112430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.112749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.112774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.113080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.113105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.113457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.113481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 
00:33:51.982 [2024-07-20 19:04:02.113873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.113901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.114130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.114154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.114470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.114502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.114777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.114813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.115080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.115108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.115366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.115390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.115623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.115651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.115932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.115960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.116416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.116465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.116748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.116774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 
00:33:51.982 [2024-07-20 19:04:02.117133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.117177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.117463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.117493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.117916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.117945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.118255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.118296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.118604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.118632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.118909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.118938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.119190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.119219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.119496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.119520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.119811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.119840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.120098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.120126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 
00:33:51.982 [2024-07-20 19:04:02.120619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.120668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.982 qpair failed and we were unable to recover it. 00:33:51.982 [2024-07-20 19:04:02.120956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.982 [2024-07-20 19:04:02.120982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.121271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.121299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.121558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.121586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.121844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.121875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.122203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.122228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.122505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.122533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.122801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.122830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.123099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.123128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.123417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.123442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 
00:33:51.983 [2024-07-20 19:04:02.123749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.123778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.124081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.124109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.124417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.124450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.124740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.124765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.125047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.125075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.125307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.125335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.125595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.125623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.125906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.125947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.126212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.126241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.126498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.126526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 
00:33:51.983 [2024-07-20 19:04:02.126790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.126826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.127095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.127120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.127378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.127408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.127682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.127715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.128000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.128029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.128307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.128332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.128607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.128635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.128908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.128938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.129182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.129210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 00:33:51.983 [2024-07-20 19:04:02.129562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.983 [2024-07-20 19:04:02.129586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.983 qpair failed and we were unable to recover it. 
00:33:51.983 [2024-07-20 19:04:02.129868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.983 [2024-07-20 19:04:02.129897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420
00:33:51.983 qpair failed and we were unable to recover it.
[The same three-line error sequence repeats back-to-back from 19:04:02.129868 through 19:04:02.196935 (console time 00:33:51.983-00:33:51.986), with only the microsecond timestamps changing: every connect() attempt to 10.0.0.2, port 4420 fails with errno = 111, and the qpair on tqpair=0xe58840 cannot be recovered.]
00:33:51.986 [2024-07-20 19:04:02.197202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.986 [2024-07-20 19:04:02.197229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.986 qpair failed and we were unable to recover it. 00:33:51.986 [2024-07-20 19:04:02.197498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.986 [2024-07-20 19:04:02.197524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.986 qpair failed and we were unable to recover it. 00:33:51.986 [2024-07-20 19:04:02.197829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.986 [2024-07-20 19:04:02.197859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.986 qpair failed and we were unable to recover it. 00:33:51.986 [2024-07-20 19:04:02.198148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.986 [2024-07-20 19:04:02.198176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.986 qpair failed and we were unable to recover it. 00:33:51.986 [2024-07-20 19:04:02.198712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.986 [2024-07-20 19:04:02.198761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.986 qpair failed and we were unable to recover it. 00:33:51.986 [2024-07-20 19:04:02.199030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.986 [2024-07-20 19:04:02.199057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.986 qpair failed and we were unable to recover it. 00:33:51.986 [2024-07-20 19:04:02.199399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.986 [2024-07-20 19:04:02.199428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.986 qpair failed and we were unable to recover it. 00:33:51.986 [2024-07-20 19:04:02.199714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.986 [2024-07-20 19:04:02.199743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.986 qpair failed and we were unable to recover it. 00:33:51.986 [2024-07-20 19:04:02.200025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.986 [2024-07-20 19:04:02.200054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.986 qpair failed and we were unable to recover it. 00:33:51.986 [2024-07-20 19:04:02.200309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.986 [2024-07-20 19:04:02.200334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.986 qpair failed and we were unable to recover it. 
00:33:51.986 [2024-07-20 19:04:02.200659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.986 [2024-07-20 19:04:02.200687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.986 qpair failed and we were unable to recover it. 00:33:51.986 [2024-07-20 19:04:02.200957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.986 [2024-07-20 19:04:02.200986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.986 qpair failed and we were unable to recover it. 00:33:51.986 [2024-07-20 19:04:02.201258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.986 [2024-07-20 19:04:02.201286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.986 qpair failed and we were unable to recover it. 00:33:51.986 [2024-07-20 19:04:02.201550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.986 [2024-07-20 19:04:02.201575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.986 qpair failed and we were unable to recover it. 00:33:51.986 [2024-07-20 19:04:02.201928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.986 [2024-07-20 19:04:02.201957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.986 qpair failed and we were unable to recover it. 00:33:51.986 [2024-07-20 19:04:02.202235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.986 [2024-07-20 19:04:02.202263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.986 qpair failed and we were unable to recover it. 00:33:51.986 [2024-07-20 19:04:02.202727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.986 [2024-07-20 19:04:02.202774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.986 qpair failed and we were unable to recover it. 00:33:51.986 [2024-07-20 19:04:02.203055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.986 [2024-07-20 19:04:02.203081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.986 qpair failed and we were unable to recover it. 00:33:51.986 [2024-07-20 19:04:02.203421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.986 [2024-07-20 19:04:02.203448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.986 qpair failed and we were unable to recover it. 00:33:51.986 [2024-07-20 19:04:02.203715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.986 [2024-07-20 19:04:02.203742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.986 qpair failed and we were unable to recover it. 
00:33:51.986 [2024-07-20 19:04:02.204014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.986 [2024-07-20 19:04:02.204043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.986 qpair failed and we were unable to recover it. 00:33:51.986 [2024-07-20 19:04:02.204313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.986 [2024-07-20 19:04:02.204338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.986 qpair failed and we were unable to recover it. 00:33:51.986 [2024-07-20 19:04:02.204553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.204578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.204798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.204824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.205108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.205136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.205423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.205448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.205752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.205780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.206067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.206111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.206653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.206705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.206960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.206986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 
00:33:51.987 [2024-07-20 19:04:02.207261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.207290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.207525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.207554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.207801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.207830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.208090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.208115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.208406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.208434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.208702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.208731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.209003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.209032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.209314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.209339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.209630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.209658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.209901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.209929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 
00:33:51.987 [2024-07-20 19:04:02.210480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.210529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.210811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.210837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.211115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.211143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.211418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.211446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.211863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.211892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.212153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.212178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.212455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.212482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.212751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.212780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.213031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.213059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.213343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.213387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 
00:33:51.987 [2024-07-20 19:04:02.213689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.213714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.214210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.214254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.214840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.214871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.215156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.215182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.215438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.215466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.215703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.215737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.216034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.216063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.216325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.216350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.216599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.216626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.216892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.216918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 
00:33:51.987 [2024-07-20 19:04:02.217200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.217228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.217493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.217518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.217768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.217805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.218097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.218138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.218464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.218489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.218688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.218712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.218983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.219013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.219276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.219301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.219536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.219580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.219940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.219969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 
00:33:51.987 [2024-07-20 19:04:02.220212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.220240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.220528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.220555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.220805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.220833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.221114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.221155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.221419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.221447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.221711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.221739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.222032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.222061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.222341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.222365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.222626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.222654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.222897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.222926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 
00:33:51.987 [2024-07-20 19:04:02.223358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.223408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.223699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.223722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.224039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.224064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.224332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.224358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.224876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.987 [2024-07-20 19:04:02.224905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.987 qpair failed and we were unable to recover it. 00:33:51.987 [2024-07-20 19:04:02.225162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.225187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.225439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.225464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.225740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.225768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.226085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.226111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.226378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.226403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 
00:33:51.988 [2024-07-20 19:04:02.226733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.226761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.227037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.227063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.227578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.227628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.227912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.227953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.228242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.228270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.228560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.228587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.228876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.228909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.229185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.229210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.229519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.229547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.229839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.229868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 
00:33:51.988 [2024-07-20 19:04:02.230139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.230167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.230438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.230463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.230733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.230761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.231060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.231086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.231407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.231474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.231763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.231788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.232091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.232120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.232353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.232383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.232852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.232881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.233146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.233171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 
00:33:51.988 [2024-07-20 19:04:02.233467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.233495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.233731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.233759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.234032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.234062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.234398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.234440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.234705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.234735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.235013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.235041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.235536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.235586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.235822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.235847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.236220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.236258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.236544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.236571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 
00:33:51.988 [2024-07-20 19:04:02.236851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.236881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.237173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.237199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.237503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.237531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.237824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.237853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.238104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.238133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.238381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.238406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.238689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.238719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.238993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.239023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.239396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.239420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.239649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.239675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 
00:33:51.988 [2024-07-20 19:04:02.239968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.239997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.240268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.240296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.240817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.240878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.241166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.241191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.241435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.241461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.241758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.241786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.242073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.242101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.242328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.242353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.242602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.242627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.242925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.242955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 
00:33:51.988 [2024-07-20 19:04:02.243224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.243252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.243568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.243592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.243914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.243939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.244254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.244284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.244829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.244882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.245119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.245159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.245469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.245497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.245783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.245827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.246068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.246096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.246443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.246467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 
00:33:51.988 [2024-07-20 19:04:02.246768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.988 [2024-07-20 19:04:02.246806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.988 qpair failed and we were unable to recover it. 00:33:51.988 [2024-07-20 19:04:02.247101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.247125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.247449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.247478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.247787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.247819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.248099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.248127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.248393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.248420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.248810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.248856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.249104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.249144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.249414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.249442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.249731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.249758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 
00:33:51.989 [2024-07-20 19:04:02.250035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.250064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.250354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.250379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.250682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.250710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.250977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.251006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.251268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.251301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.251600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.251640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.251905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.251934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.252200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.252229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.252595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.252623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.252909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.252935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 
00:33:51.989 [2024-07-20 19:04:02.253192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.253217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.253440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.253466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.253705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.253730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.253972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.253999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.254278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.254308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.254572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.254600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.254824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.254853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.255112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.255137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.255417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.255446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.255737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.255764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 
00:33:51.989 [2024-07-20 19:04:02.256067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.256096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.256381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.256407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.256676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.256705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.256992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.257021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.257509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.257560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.257818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.257843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.258082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.258108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.258397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.258425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.258734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.258762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.259052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.259078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 
00:33:51.989 [2024-07-20 19:04:02.259349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.259375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.259646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.259674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.259941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.259970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.260234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.260260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.260535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.260563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.260830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.260858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.261147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.261175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.261442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.261467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.261741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.261769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.262042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.262067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 
00:33:51.989 [2024-07-20 19:04:02.262285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.262326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.262587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.262613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.262883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.262909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.263145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.263172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.263648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.263699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.263960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.263990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.264236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.264264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.264560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.264588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.264862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.264891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.265138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.265162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 
00:33:51.989 [2024-07-20 19:04:02.265470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.265498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.265728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.265758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.266064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.266093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.989 [2024-07-20 19:04:02.266383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.989 [2024-07-20 19:04:02.266408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.989 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.266690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.266718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.267017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.267046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.267386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.267411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.267681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.267706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.267954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.267983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.268255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.268283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 
00:33:51.990 [2024-07-20 19:04:02.268762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.268816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.269102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.269128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.269384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.269413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.269652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.269680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.269970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.269999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.270281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.270305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.270606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.270634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.270901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.270930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.271196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.271224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.271495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.271522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 
00:33:51.990 [2024-07-20 19:04:02.271811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.271841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.272131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.272159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.272667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.272718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.273007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.273033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.273307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.273336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.273605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.273633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.273905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.273934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.274195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.274221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.274504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.274531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.274799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.274828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 
00:33:51.990 [2024-07-20 19:04:02.275114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.275143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.275413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.275438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.275708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.275736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.275977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.276003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.276490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.276538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.276814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.276855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.277356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.277399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.277684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.277715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.277988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.278018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.278289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.278315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 
00:33:51.990 [2024-07-20 19:04:02.278595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.278623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.278886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.278916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.279164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.279192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.279453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.279480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.279773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.279810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.280085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.280114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.280459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.280523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.280809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.280836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.281092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.281120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.281358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.281388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 
00:33:51.990 [2024-07-20 19:04:02.281896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.281926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.282183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.282208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.282512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.282540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.282812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.282841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.283105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.283135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.283404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.283443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.283714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.283744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.284021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.284049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.284322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.284350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.284617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.284642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 
00:33:51.990 [2024-07-20 19:04:02.284914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.284947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.285229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.285267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.285529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.285557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.285846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.285878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.286102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.286143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.286429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.286457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.286734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.286764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:51.990 [2024-07-20 19:04:02.287039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.990 [2024-07-20 19:04:02.287068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:51.990 qpair failed and we were unable to recover it. 00:33:52.263 [2024-07-20 19:04:02.287350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.287380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 [2024-07-20 19:04:02.287657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.287686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 
00:33:52.263 [2024-07-20 19:04:02.287957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.287986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 [2024-07-20 19:04:02.288244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.288269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 [2024-07-20 19:04:02.288515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.288543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 [2024-07-20 19:04:02.288785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.288823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 [2024-07-20 19:04:02.289084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.289110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 [2024-07-20 19:04:02.289325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.289350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 [2024-07-20 19:04:02.289610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.289687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 [2024-07-20 19:04:02.289958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.289988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 [2024-07-20 19:04:02.290231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.290260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 [2024-07-20 19:04:02.290499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.290524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 
00:33:52.263 [2024-07-20 19:04:02.290826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.290855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 [2024-07-20 19:04:02.291126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.291151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 [2024-07-20 19:04:02.291440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.291468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 [2024-07-20 19:04:02.291765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.291790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 [2024-07-20 19:04:02.292041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.292070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 [2024-07-20 19:04:02.292361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.292386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 [2024-07-20 19:04:02.292878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.292907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 [2024-07-20 19:04:02.293181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.293220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 [2024-07-20 19:04:02.293478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.293506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 [2024-07-20 19:04:02.293808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.293835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 
00:33:52.263 [2024-07-20 19:04:02.294113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1543316 Killed "${NVMF_APP[@]}" "$@" 00:33:52.263 [2024-07-20 19:04:02.294148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 [2024-07-20 19:04:02.294396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.294422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 [2024-07-20 19:04:02.294697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.294726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:33:52.263 [2024-07-20 19:04:02.294970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.295001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:52.263 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:52.263 [2024-07-20 19:04:02.295518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.263 [2024-07-20 19:04:02.295571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.263 qpair failed and we were unable to recover it. 00:33:52.263 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:52.264 [2024-07-20 19:04:02.295841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.295868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:52.264 [2024-07-20 19:04:02.296118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.296147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.296414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.296442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 
00:33:52.264 [2024-07-20 19:04:02.296809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.296848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.297113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.297139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.297417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.297445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.297710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.297743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.298047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.298077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.298344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.298370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.298646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.298675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.298918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.298948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.299462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.299510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.299784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.299818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 
00:33:52.264 [2024-07-20 19:04:02.300137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.300166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.300407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.300435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.300906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.300935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1543860 00:33:52.264 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:52.264 [2024-07-20 19:04:02.301168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.301194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1543860 00:33:52.264 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1543860 ']' 00:33:52.264 [2024-07-20 19:04:02.301449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.301479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:52.264 [2024-07-20 19:04:02.301743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.301772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:52.264 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:52.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:52.264 [2024-07-20 19:04:02.302083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:52.264 [2024-07-20 19:04:02.302113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:52.264 [2024-07-20 19:04:02.302369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.302396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.302671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.302696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.302976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.303006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.303368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.303397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.303670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.303707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.303980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.304011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.304280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.304309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.304645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.304694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 
00:33:52.264 [2024-07-20 19:04:02.304978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.305004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.305264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.305292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.305556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.305584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.305851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.305883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.306166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.306191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.306435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.306465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.306728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.264 [2024-07-20 19:04:02.306757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.264 qpair failed and we were unable to recover it. 00:33:52.264 [2024-07-20 19:04:02.307034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.307064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.307336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.307360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.307647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.307675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 
00:33:52.265 [2024-07-20 19:04:02.307924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.307956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.308453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.308504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.308782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.308829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.309117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.309146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.309416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.309444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.309897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.309926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.310209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.310235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.310674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.310726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.310998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.311027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.311517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.311566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 
00:33:52.265 [2024-07-20 19:04:02.311827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.311854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.312133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.312161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.312449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.312478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.312764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.312790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.313047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.313073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.313354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.313382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.313636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.313661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.313896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.313923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.314204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.314234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.314519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.314547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 
00:33:52.265 [2024-07-20 19:04:02.314838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.314868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.315127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.315155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.315450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.315478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.315765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.315802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.316032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.316069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.316328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.316359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.316616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.316662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.316936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.316963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.317205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.317232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.317780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.317839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 
00:33:52.265 [2024-07-20 19:04:02.318121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.318147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.318449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.318478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.318719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.318748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.319019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.319048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.319346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.319371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.319683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.319711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.319973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.320002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.320449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.265 [2024-07-20 19:04:02.320498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.265 qpair failed and we were unable to recover it. 00:33:52.265 [2024-07-20 19:04:02.320851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.320881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.321170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.321198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 
00:33:52.266 [2024-07-20 19:04:02.321463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.321491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.321917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.321946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.322205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.322231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.322525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.322550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.322832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.322861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.323106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.323133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.323425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.323450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.323759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.323787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.324059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.324087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.324438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.324462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 
00:33:52.266 [2024-07-20 19:04:02.324737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.324762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.325034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.325060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.325328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.325356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.325837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.325866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.326154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.326179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.326459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.326486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.326730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.326758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.327031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.327061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.327344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.327370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.327652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.327685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 
00:33:52.266 [2024-07-20 19:04:02.327959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.327988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.328324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.328353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.328752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.328780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.329029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.329070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.329369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.329397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.329676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.329701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.329986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.330012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.330300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.330328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.330599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.330626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.330871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.330900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 
00:33:52.266 [2024-07-20 19:04:02.331172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.331196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.331481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.331510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.331773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.331817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.332097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.332125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.332383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.332408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.332680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.332709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.332987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.333016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.333481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.333534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.333895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.266 [2024-07-20 19:04:02.333925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.266 qpair failed and we were unable to recover it. 00:33:52.266 [2024-07-20 19:04:02.334194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.334222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 
00:33:52.267 [2024-07-20 19:04:02.334520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.334549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.334821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.334850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.335118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.335144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.335447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.335475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.335737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.335765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.336055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.336093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.336321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.336349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.336609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.336637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.336938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.336964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.337230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.337258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 
00:33:52.267 [2024-07-20 19:04:02.337521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.337546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.337815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.337844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.338119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.338147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.338622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.338669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.338953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.338979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.339252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.339280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.340303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.340344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.340639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.340669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.340993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.341020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.341282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.341310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 
00:33:52.267 [2024-07-20 19:04:02.341548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.341577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.342350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.342383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.342708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.342751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.343045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.343074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.343346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.343374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.343768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.343830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.344072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.344101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.344339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.344364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.344574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.344599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.344831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.344858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 
00:33:52.267 [2024-07-20 19:04:02.345107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.345132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.345404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.345432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.345720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.267 [2024-07-20 19:04:02.345749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.267 qpair failed and we were unable to recover it. 00:33:52.267 [2024-07-20 19:04:02.346019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.346048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.346345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.346370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.346629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.346659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.346927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.346956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.347255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.347301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.347582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.347607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.347888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.347917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 
00:33:52.268 [2024-07-20 19:04:02.348182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.348210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.348505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.348554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.348818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.348845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.349047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.349073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.349230] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:52.268 [2024-07-20 19:04:02.349303] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:52.268 [2024-07-20 19:04:02.349382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.349410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.349718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.349766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.350019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.350046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.350301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.350329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.350594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.350621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 
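For reference on the masks visible above: the nvmf_tgt invocation passes -m 0xF0 and the EAL parameter line shows -c 0xF0; in both cases bit n of the hex mask selects CPU core n, so 0xF0 restricts the restarted target to cores 4-7. A tiny self-contained C sketch (plain bit arithmetic, not SPDK or DPDK API) that decodes such a mask:

    /* coremask.c - decode a hex coremask like the 0xF0 passed via -m / -c above.
     * Bit n set means CPU core n is enabled; 0xF0 -> cores 4 5 6 7.
     * Build: cc -o coremask coremask.c && ./coremask 0xF0
     */
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        unsigned long mask = strtoul(argc > 1 ? argv[1] : "0xF0", NULL, 16);
        printf("coremask 0x%lX selects cores:", mask);
        for (unsigned int core = 0; core < 8 * sizeof(mask); core++) {
            if (mask & (1UL << core))
                printf(" %u", core);   /* prints 4 5 6 7 for 0xF0 */
        }
        printf("\n");
        return 0;
    }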
00:33:52.268 [2024-07-20 19:04:02.350889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.350918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.351203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.351229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.351507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.351535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.351770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.351806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.352049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.352077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.352361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.352386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.352643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.352671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.352927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.352954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.353237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.353265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.353499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.353527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 
00:33:52.268 [2024-07-20 19:04:02.353775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.353817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.354112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.354138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.354419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.354466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.354751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.354777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.355080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.355109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.355377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.355405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.355684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.355735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.356039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.356066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.356345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.356375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.356612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.356641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 
00:33:52.268 [2024-07-20 19:04:02.356969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.356999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.357308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.357348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.357581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.357609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.268 qpair failed and we were unable to recover it. 00:33:52.268 [2024-07-20 19:04:02.357877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.268 [2024-07-20 19:04:02.357903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.358164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.358197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.358452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.358478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.358774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.358815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.359107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.359135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.359467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.359516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.359786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.359817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 
00:33:52.269 [2024-07-20 19:04:02.360098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.360126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.360390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.360419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.360767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.360806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.361090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.361116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.361373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.361403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.361667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.361695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.361938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.361968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.362232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.362257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.362544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.362572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.362856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.362886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 
00:33:52.269 [2024-07-20 19:04:02.363154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.363182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.363431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.363457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.363724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.363752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.364028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.364054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.364308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.364352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.364614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.364639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.364929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.364958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.365231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.365259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.365552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.365596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.365881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.365908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 
00:33:52.269 [2024-07-20 19:04:02.366184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.366210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.366485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.366513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.366788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.366819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.367056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.367082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.367326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.367352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.367655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.367683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.367980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.368009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.368283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.368308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.368579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.368608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.368877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.368903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 
00:33:52.269 [2024-07-20 19:04:02.369122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.369148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.369359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.369385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.369619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.369662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.369897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.269 [2024-07-20 19:04:02.369925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.269 qpair failed and we were unable to recover it. 00:33:52.269 [2024-07-20 19:04:02.370205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.370232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.370495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.370521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.370766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.370801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.371035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.371062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.371314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.371341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.371600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.371626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 
00:33:52.270 [2024-07-20 19:04:02.371842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.371884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.372137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.372164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.372419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.372448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.372680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.372706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.372960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.372987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.373230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.373257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.373507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.373533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.373755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.373781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.374033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.374060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.374329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.374355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 
00:33:52.270 [2024-07-20 19:04:02.374626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.374653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.374888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.374915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.375166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.375192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.375451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.375477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.375704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.375730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.375962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.375988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.376219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.376260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.376525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.376551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.376768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.376799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.377026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.377053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 
00:33:52.270 [2024-07-20 19:04:02.377294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.377321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.377567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.377593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.377810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.377839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.378102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.378127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.378420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.378445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.378688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.378713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.378959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.378986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.379254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.379279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.379543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.379568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.379830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.379856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 
00:33:52.270 [2024-07-20 19:04:02.380093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.380119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.380378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.380403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.380623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.380649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.380918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.380944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.381190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.381215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.270 qpair failed and we were unable to recover it. 00:33:52.270 [2024-07-20 19:04:02.381422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.270 [2024-07-20 19:04:02.381447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.381676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.381701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.381969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.381995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.382240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.382265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.382473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.382499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 
00:33:52.271 [2024-07-20 19:04:02.382721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.382747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.382982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.383008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.383245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.383270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.383524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.383549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.383814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.383840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.384126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.384154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.384643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.384692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 EAL: No free 2048 kB hugepages reported on node 1 00:33:52.271 [2024-07-20 19:04:02.384954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.384980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.385257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.385285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.385546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.385575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 
00:33:52.271 [2024-07-20 19:04:02.385812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.385838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.386075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.386100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.386368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.386396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.386661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.386689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.386962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.386988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.387207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.387232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.387500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.387528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.387799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.387842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.388067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.388092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.388334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.388361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 
00:33:52.271 [2024-07-20 19:04:02.388628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.388653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.388897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.388923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.389144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.389169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.389442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.389467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.389732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.389757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.390027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.390053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.390290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.390315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.390592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.390617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.390856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.390882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.391122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.391147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 
00:33:52.271 [2024-07-20 19:04:02.391419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.271 [2024-07-20 19:04:02.391444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.271 qpair failed and we were unable to recover it. 00:33:52.271 [2024-07-20 19:04:02.391683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.391708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.391950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.391976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.392224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.392249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.392509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.392534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.392746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.392772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.393022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.393048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.393301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.393326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.393560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.393585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.393823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.393849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 
00:33:52.272 [2024-07-20 19:04:02.394069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.394094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.394358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.394383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.394628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.394653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.394928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.394954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.395171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.395198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.395448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.395474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.395697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.395722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.395971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.395997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.396218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.396245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.396483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.396508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 
00:33:52.272 [2024-07-20 19:04:02.396722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.396751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.397018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.397044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.397281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.397305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.397516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.397541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.397758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.397783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.398027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.398054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.398321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.398346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.398614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.398640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.398903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.398930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.399190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.399216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 
00:33:52.272 [2024-07-20 19:04:02.399448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.399474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.399749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.399775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.400031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.400057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.400276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.400303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.400546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.400573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.400830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.400856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.401087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.401112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.401318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.401343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.401554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.401579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.401821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.401847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 
00:33:52.272 [2024-07-20 19:04:02.402086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.272 [2024-07-20 19:04:02.402111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.272 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-20 19:04:02.402376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.402401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.402634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.402659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.402928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.402954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.403192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.403218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.403484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.403509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.403729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.403754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.404009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.404042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.404270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.404296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.404540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.404566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 
00:33:52.273 [2024-07-20 19:04:02.404834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.404860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.405089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.405115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.405375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.405400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.405632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.405658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.405927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.405953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.406195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.406221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.406437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.406462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.406669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.406694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.406960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.406986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.407204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.407231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 
00:33:52.273 [2024-07-20 19:04:02.407487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.407512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.407780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.407812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.408091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.408116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.408361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.408387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.408674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.408699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.408951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.408977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.409195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.409221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.409455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.409480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.409719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.409744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.409967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.409993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 
00:33:52.273 [2024-07-20 19:04:02.410215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.410242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.410454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.410481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.410747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.410774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.411048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.411074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.411315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.411341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.411564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.411591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.411826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.411853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.412101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.412126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.412377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.412403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.412672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.412697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 
00:33:52.273 [2024-07-20 19:04:02.412947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.412973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-20 19:04:02.413181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.273 [2024-07-20 19:04:02.413206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.413446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.413472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.413712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.413738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.413967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.413994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.414228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.414254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.414480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.414505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.414717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.414742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.414985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.415016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.415260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.415285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 
00:33:52.274 [2024-07-20 19:04:02.415499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.415525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.415765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.415790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.416065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.416091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.416308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.416333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.416570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.416595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.416814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.416841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.417073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.417098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.417333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.417358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.417591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.417617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.417832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.417859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 
00:33:52.274 [2024-07-20 19:04:02.418124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.418149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.418363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.418389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.418661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.418686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.418929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.418956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.419195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.419223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.419436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.419462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.419674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.419699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.419911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.419937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.420157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.420183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.420384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.420409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 
00:33:52.274 [2024-07-20 19:04:02.420622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.420647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.420682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:52.274 [2024-07-20 19:04:02.420885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.420913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.421160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.421186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.421399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.421425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.421661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.421686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.421964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.421991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.422233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.422259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.422523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.422549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.422780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.422813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.423037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.423063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 
00:33:52.274 [2024-07-20 19:04:02.423328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.423353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.423578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.423603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-20 19:04:02.423814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.274 [2024-07-20 19:04:02.423840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.424078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.424104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.424348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.424375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.424605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.424630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.425045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.425071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.425312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.425338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.425560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.425585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.425803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.425829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 
00:33:52.275 [2024-07-20 19:04:02.426042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.426069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.426329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.426354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.426586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.426611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.426882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.426909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.427154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.427180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.427443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.427469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.427915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.427942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.428179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.428204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.428467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.428493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.428736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.428761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 
00:33:52.275 [2024-07-20 19:04:02.429010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.429036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.429256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.429282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.429525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.429555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.429778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.429811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.430080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.430106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.430323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.430349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.430589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.430615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.430832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.430858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.431290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.431331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.431589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.431618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 
00:33:52.275 [2024-07-20 19:04:02.431894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.431921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.432137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.432162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.432431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.432457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.432722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.432748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.433168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.433209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.433476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.433504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.433784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.433820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.434047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.434073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.434304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.434329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.434593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.434619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 
00:33:52.275 [2024-07-20 19:04:02.434867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.434895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.435142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.435168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.435472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.435498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.275 qpair failed and we were unable to recover it. 00:33:52.275 [2024-07-20 19:04:02.435737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-07-20 19:04:02.435763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.436012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.436038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.436262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.436288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.436508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.436535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.436762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.436788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.437020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.437047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.437261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.437289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 
00:33:52.276 [2024-07-20 19:04:02.437509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.437537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.437781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.437815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.438038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.438065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.438305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.438331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.438597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.438623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.438863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.438892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.439112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.439138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.439369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.439394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.439618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.439644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.439886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.439914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 
00:33:52.276 [2024-07-20 19:04:02.440133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.440162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.440425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.440452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.440732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.440758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.441004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.441037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.441278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.441304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.441544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.441570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.441780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.441814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.442057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.442083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.442350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.442376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.442615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.442641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 
00:33:52.276 [2024-07-20 19:04:02.442910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.442937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.443181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.443209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.443426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.443452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.443697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.443723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.443975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.444002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.444233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.444259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.444487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.444512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.444757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.444784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.445024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-07-20 19:04:02.445051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.276 qpair failed and we were unable to recover it. 00:33:52.276 [2024-07-20 19:04:02.445260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.445286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 
00:33:52.277 [2024-07-20 19:04:02.445524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.445550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.445765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.445790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.446082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.446108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.446375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.446402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.446669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.446695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.446967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.446994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.447213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.447239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.447460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.447487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.447730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.447756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.448001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.448027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 
00:33:52.277 [2024-07-20 19:04:02.448291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.448321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.448597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.448623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.448862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.448889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.449134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.449160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.449391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.449417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.449690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.449715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.449979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.450006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.450225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.450251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.450487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.450512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.450757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.450783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 
00:33:52.277 [2024-07-20 19:04:02.451062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.451089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.451332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.451357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.451596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.451621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.451881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.451907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.452150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.452178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.452457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.452483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.452703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.452728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.452967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.452994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.453262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.453289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.453503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.453528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 
00:33:52.277 [2024-07-20 19:04:02.453816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.453843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.454111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.454137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.454384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.454410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.454623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.454648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.454907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.454934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.455174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.455200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.455453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.455479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.455724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-07-20 19:04:02.455750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.277 qpair failed and we were unable to recover it. 00:33:52.277 [2024-07-20 19:04:02.456007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.456033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.456263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.456289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 
00:33:52.278 [2024-07-20 19:04:02.456530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.456557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.456799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.456826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.457069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.457102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.457341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.457367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.457606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.457632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.457850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.457877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.458089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.458114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.458331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.458362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.458583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.458610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.458854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.458880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 
00:33:52.278 [2024-07-20 19:04:02.459099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.459125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.459359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.459393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.459613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.459638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.459885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.459911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.460152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.460177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.460445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.460471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.460690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.460714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.460981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.461007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.461254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.461279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.461492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.461518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 
00:33:52.278 [2024-07-20 19:04:02.461738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.461764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.461980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.462007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.462241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.462268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.462542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.462568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.462817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.462843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.463088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.463115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.463388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.463413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.463614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.463640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.463912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.463938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.464185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.464212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 
00:33:52.278 [2024-07-20 19:04:02.464471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.464498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.464764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.464790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.465043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.465069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.465320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.465346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.465598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.465623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.465865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.465892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.466142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.466169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.466406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.466432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.466704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.278 [2024-07-20 19:04:02.466734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.278 qpair failed and we were unable to recover it. 00:33:52.278 [2024-07-20 19:04:02.466948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.466975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 
00:33:52.279 [2024-07-20 19:04:02.467207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.467233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.467480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.467506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.467726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.467751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.467978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.468005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.468243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.468270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.468517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.468542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.468765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.468791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.469035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.469061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.469300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.469325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.469544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.469571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 
00:33:52.279 [2024-07-20 19:04:02.469815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.469842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.470086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.470114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.470365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.470391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.470645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.470671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.470886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.470912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.471133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.471160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.471607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.471633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.471881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.471907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.472153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.472178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.472419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.472444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 
00:33:52.279 [2024-07-20 19:04:02.472690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.472715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.472946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.472972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.473205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.473230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.473678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.473704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.473937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.473963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.474199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.474225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.474469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.474496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.474712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.474739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.474994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.475022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.475255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.475280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 
00:33:52.279 [2024-07-20 19:04:02.475539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.475564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.475800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.475827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.476048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.476083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.476282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.476307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.476678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.476705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.476951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.476978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.477229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.477255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.477490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.477515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.477764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.477790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.279 [2024-07-20 19:04:02.478062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.478103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 
00:33:52.279 [2024-07-20 19:04:02.478374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.279 [2024-07-20 19:04:02.478399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.279 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.478641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.478667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.478915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.478941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.479156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.479182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.479419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.479445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.479686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.479712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.479982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.480008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.480233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.480259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.480504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.480529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.480769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.480812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 
00:33:52.280 [2024-07-20 19:04:02.481033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.481059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.481322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.481347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.481596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.481622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.481848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.481875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.482093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.482120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.482367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.482394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.482635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.482661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.482907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.482934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.483163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.483199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.483436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.483463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 
00:33:52.280 [2024-07-20 19:04:02.483704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.483730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.483954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.483982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.484252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.484278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.484487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.484513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.484781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.484816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.485040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.485066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.485293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.485319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.485561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.485587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.485883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.485909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.486151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.486176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 
00:33:52.280 [2024-07-20 19:04:02.486404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.486430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.486649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.486674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.486889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.486915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.487134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.487159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.487369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.487394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.487608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.487636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.280 [2024-07-20 19:04:02.487873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.280 [2024-07-20 19:04:02.487900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.280 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.488180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.488207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.488443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.488468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.488710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.488735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 
00:33:52.281 [2024-07-20 19:04:02.488970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.488996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.489244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.489269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.489515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.489540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.489748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.489776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.490009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.490035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.490272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.490297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.490545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.490570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.490808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.490835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.491053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.491082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.491329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.491354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 
00:33:52.281 [2024-07-20 19:04:02.491573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.491598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.491843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.491869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.492106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.492131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.492344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.492370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.492611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.492637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.492882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.492908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.493127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.493152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.493392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.493417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.493672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.493697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.493924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.493951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 
00:33:52.281 [2024-07-20 19:04:02.494190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.494215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.494468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.494494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.494740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.494766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.495016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.495042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.495254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.495280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.495521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.495546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.495789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.495821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.496061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.496097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.496347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.496373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.496612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.496637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 
00:33:52.281 [2024-07-20 19:04:02.496853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.496880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.497100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.497126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.497364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.497391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.497655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.497681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.497901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.497928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.498174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.498200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.498464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.498490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.498707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.281 [2024-07-20 19:04:02.498732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.281 qpair failed and we were unable to recover it. 00:33:52.281 [2024-07-20 19:04:02.498954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.498980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.499211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.499237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 
00:33:52.282 [2024-07-20 19:04:02.499454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.499479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.499694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.499719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.499930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.499957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.500193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.500220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.500515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.500540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.500808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.500835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.501045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.501071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.501304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.501330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.501575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.501601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.501840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.501867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 
00:33:52.282 [2024-07-20 19:04:02.502111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.502137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.502376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.502402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.502659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.502686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.502918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.502945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.503165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.503191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.503431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.503457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.503714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.503740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.503950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.503977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.504194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.504219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.504459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.504486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 
00:33:52.282 [2024-07-20 19:04:02.504755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.504781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.505056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.505082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.505321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.505346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.505584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.505610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.505881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.505908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.506172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.506198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.506444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.506469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.506726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.506752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.507019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.507046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.507301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.507326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 
00:33:52.282 [2024-07-20 19:04:02.507561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.507587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.507827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.507854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.508094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.508120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.508359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.508384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.508621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.508646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.508891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.508918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.509144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.509170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.509414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.509439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.509705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.509731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.282 qpair failed and we were unable to recover it. 00:33:52.282 [2024-07-20 19:04:02.509987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.282 [2024-07-20 19:04:02.510013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 
00:33:52.283 [2024-07-20 19:04:02.510247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.510273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.510491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.510518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.510759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.510787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.511021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.511046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.511257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.511282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.511527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.511553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.511766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.511791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.512019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.512044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.512224] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:52.283 [2024-07-20 19:04:02.512259] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:52.283 [2024-07-20 19:04:02.512275] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:52.283 [2024-07-20 19:04:02.512281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.512287] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:52.283 [2024-07-20 19:04:02.512299] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:52.283 [2024-07-20 19:04:02.512306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.512355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:33:52.283 [2024-07-20 19:04:02.512508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.512532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.512570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:33:52.283 [2024-07-20 19:04:02.512622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:33:52.283 [2024-07-20 19:04:02.512620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:33:52.283 [2024-07-20 19:04:02.512767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.512804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.513023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.513049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.513290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.513317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.513529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.513555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.513764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.513805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.514031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.514056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.514263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.514289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 
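The app_setup_trace notices interleaved above describe how to capture the tracepoint data while the nvmf target is still running. A minimal capture workflow based on those notices (the instance id 0 and the /dev/shm path are taken directly from the notice text; locations on the test host may differ):

    spdk_trace -s nvmf -i 0          # snapshot the events of the running nvmf app at runtime
    cp /dev/shm/nvmf_trace.0 /tmp/   # or copy the shared-memory trace file for offline analysis/debug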
00:33:52.283 [2024-07-20 19:04:02.514493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.514518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.514753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.514778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.515011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.515037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.515255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.515280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.515526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.515552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.515804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.515830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.516048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.516074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.516316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.516342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.516564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.516589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.516814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.516841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 
00:33:52.283 [2024-07-20 19:04:02.517108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.517134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.517509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.517534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.517752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.517777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.518017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.518043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.518287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.518313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.518551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.518576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.518815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.518842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.519072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.519101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.519342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.519368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 00:33:52.283 [2024-07-20 19:04:02.519602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.519627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.283 qpair failed and we were unable to recover it. 
00:33:52.283 [2024-07-20 19:04:02.519847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.283 [2024-07-20 19:04:02.519873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.520095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.520121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.520359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.520389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.520604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.520629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.520871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.520897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.521120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.521146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.521384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.521409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.521658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.521683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.521921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.521947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.522171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.522196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 
00:33:52.284 [2024-07-20 19:04:02.522407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.522432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.522674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.522700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.522949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.522975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.523220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.523245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.523486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.523512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.523723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.523748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.523977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.524004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.524256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.524282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.524525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.524550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.524786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.524820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 
00:33:52.284 [2024-07-20 19:04:02.525062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.525089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.525305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.525330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.525535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.525560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.525772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.525824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.526036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.526062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.526281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.526308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.526523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.526550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.526758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.526784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.527002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.527028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.527452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.527476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 
00:33:52.284 [2024-07-20 19:04:02.527735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.527760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.527989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.528016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.528233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.528259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.528474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.528499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.528715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.528740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.528951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.528978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.529213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.284 [2024-07-20 19:04:02.529238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.284 qpair failed and we were unable to recover it. 00:33:52.284 [2024-07-20 19:04:02.529452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.285 [2024-07-20 19:04:02.529477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.285 qpair failed and we were unable to recover it. 00:33:52.285 [2024-07-20 19:04:02.529684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.285 [2024-07-20 19:04:02.529709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.285 qpair failed and we were unable to recover it. 00:33:52.285 [2024-07-20 19:04:02.529915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.285 [2024-07-20 19:04:02.529941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.285 qpair failed and we were unable to recover it. 
00:33:52.285 [2024-07-20 19:04:02.530155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.285 [2024-07-20 19:04:02.530180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.285 qpair failed and we were unable to recover it. 00:33:52.285 [2024-07-20 19:04:02.530446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.285 [2024-07-20 19:04:02.530471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.285 qpair failed and we were unable to recover it. 00:33:52.285 [2024-07-20 19:04:02.530690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.285 [2024-07-20 19:04:02.530715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.285 qpair failed and we were unable to recover it. 00:33:52.285 [2024-07-20 19:04:02.530969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.285 [2024-07-20 19:04:02.530995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.285 qpair failed and we were unable to recover it. 00:33:52.285 [2024-07-20 19:04:02.531197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.285 [2024-07-20 19:04:02.531222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.285 qpair failed and we were unable to recover it. 00:33:52.285 [2024-07-20 19:04:02.531460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.285 [2024-07-20 19:04:02.531485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.285 qpair failed and we were unable to recover it. 00:33:52.285 [2024-07-20 19:04:02.531720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.285 [2024-07-20 19:04:02.531745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.285 qpair failed and we were unable to recover it. 00:33:52.285 [2024-07-20 19:04:02.531962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.285 [2024-07-20 19:04:02.531988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.285 qpair failed and we were unable to recover it. 00:33:52.285 [2024-07-20 19:04:02.532227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.285 [2024-07-20 19:04:02.532251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.285 qpair failed and we were unable to recover it. 00:33:52.285 [2024-07-20 19:04:02.532495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.285 [2024-07-20 19:04:02.532520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.285 qpair failed and we were unable to recover it. 
00:33:52.285 [2024-07-20 19:04:02.532760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.285 [2024-07-20 19:04:02.532786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420
00:33:52.285 qpair failed and we were unable to recover it.
[... the same pair of errors (posix_sock_create: connect() failed, errno = 111, i.e. connection refused; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420) followed by "qpair failed and we were unable to recover it." repeats continuously from 19:04:02.532 through 19:04:02.586 ...]
00:33:52.557 [2024-07-20 19:04:02.586858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.557 [2024-07-20 19:04:02.586884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420
00:33:52.557 qpair failed and we were unable to recover it.
00:33:52.557 [2024-07-20 19:04:02.587102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.557 [2024-07-20 19:04:02.587127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.557 qpair failed and we were unable to recover it. 00:33:52.557 [2024-07-20 19:04:02.587363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.557 [2024-07-20 19:04:02.587389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.557 qpair failed and we were unable to recover it. 00:33:52.557 [2024-07-20 19:04:02.587613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.557 [2024-07-20 19:04:02.587638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.557 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.588152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.588205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.588468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.588495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.588719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.588744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.588983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.589010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.589252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.589278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.589742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.589781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.590036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.590061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 
00:33:52.558 [2024-07-20 19:04:02.590336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.590361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.590609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.590634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.590890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.590916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.591138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.591163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.591388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.591419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.591663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.591688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.591929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.591955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.592409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.592447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.592693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.592718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.592950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.592977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 
00:33:52.558 [2024-07-20 19:04:02.593187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.593212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.593459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.593484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.593728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.593754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.593978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.594007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.594258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.594286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.594508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.594534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.594768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.594800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.595021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.595047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.595250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.595276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.595494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.595520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 
00:33:52.558 [2024-07-20 19:04:02.595765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.595790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.596029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.596055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.596290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.596315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.596577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.596602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.596812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.596838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.597082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.597108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.597328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.597353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.597575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.597600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.597838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.597864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.598109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.598134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 
00:33:52.558 [2024-07-20 19:04:02.598372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.598397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.598644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.598669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.598888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.598914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.558 qpair failed and we were unable to recover it. 00:33:52.558 [2024-07-20 19:04:02.599187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.558 [2024-07-20 19:04:02.599212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.599419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.599444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.599678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.599703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.599944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.599971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.600195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.600220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.600459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.600485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.600693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.600719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 
00:33:52.559 [2024-07-20 19:04:02.600966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.600991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.601257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.601282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.601521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.601548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.601763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.601788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.601993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.602020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.602241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.602270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.602507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.602532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.602768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.602799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.603012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.603038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.603277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.603303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 
00:33:52.559 [2024-07-20 19:04:02.603517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.603542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.603746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.603771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.604032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.604058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.604268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.604294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.604510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.604535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.604805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.604831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.605063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.605088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.605355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.605380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.605600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.605624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.605841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.605867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 
00:33:52.559 [2024-07-20 19:04:02.606068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.606093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.606293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.606318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.606584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.606609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.606848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.606873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.607083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.607109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.607348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.607374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.607585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.607610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.607823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.607849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.608085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.608110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.608345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.608370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 
00:33:52.559 [2024-07-20 19:04:02.608630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.608656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.608895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.608920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.609136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.609161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.609401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.559 [2024-07-20 19:04:02.609426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.559 qpair failed and we were unable to recover it. 00:33:52.559 [2024-07-20 19:04:02.609670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.609695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.609914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.609940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.610155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.610181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.610418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.610443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.610677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.610702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.610949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.610975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 
00:33:52.560 [2024-07-20 19:04:02.611178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.611203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.611414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.611439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.611675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.611701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.611951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.611977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.612215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.612240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.612470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.612495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.612737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.612763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.612984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.613010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.613247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.613272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.613506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.613531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 
00:33:52.560 [2024-07-20 19:04:02.613756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.613785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.614009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.614035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.614282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.614308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.614520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.614555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.614828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.614857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.615074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.615099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.615344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.615370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.615582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.615607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.615843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.615869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.616110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.616134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 
00:33:52.560 [2024-07-20 19:04:02.616393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.616418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.616653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.616678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.616944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.616969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.617170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.617195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.617427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.617453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.617695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.617720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.617960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.617985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.560 [2024-07-20 19:04:02.618222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.560 [2024-07-20 19:04:02.618247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.560 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.618487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.618514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.618753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.618778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 
00:33:52.561 [2024-07-20 19:04:02.619005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.619031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.619267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.619292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.619527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.619552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.619784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.619832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.620046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.620072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.620316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.620341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.620562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.620587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.620833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.620860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.621102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.621129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.621366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.621392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 
00:33:52.561 [2024-07-20 19:04:02.621602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.621629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.621897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.621923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.622163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.622189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.622408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.622435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.622643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.622669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.622911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.622937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.623148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.623173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.623393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.623418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.623674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.623700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.623908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.623934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 
00:33:52.561 [2024-07-20 19:04:02.624139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.624165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.624373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.624398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.624641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.624666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.624930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.624957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.625169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.625194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.625407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.625432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.625633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.625658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.625862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.625888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.626107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.626134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.626353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.626378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 
00:33:52.561 [2024-07-20 19:04:02.626635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.626660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.626932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.626958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.627225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.627251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.627492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.627517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.627756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.627782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.628009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.628034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.628246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.628271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.628489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.628514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.561 [2024-07-20 19:04:02.628711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.561 [2024-07-20 19:04:02.628736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.561 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.628952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.628979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 
00:33:52.562 [2024-07-20 19:04:02.629199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.629224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.629438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.629463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.629731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.629756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.629972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.630000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.630215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.630240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.630448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.630476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.630682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.630708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.630930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.630956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.631170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.631196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.631424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.631449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 
00:33:52.562 [2024-07-20 19:04:02.631646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.631672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.631919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.631945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.632206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.632231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.632477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.632502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.632731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.632757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.633007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.633033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.633255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.633280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.633492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.633517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.633728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.633753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.633963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.633989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 
00:33:52.562 [2024-07-20 19:04:02.634227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.634252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.634518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.634543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.634780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.634821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.635042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.635067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.635307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.635332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.635573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.635598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.635839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.635866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.636077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.636102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.636335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.636360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.636637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.636661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 
00:33:52.562 [2024-07-20 19:04:02.636879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.636906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.637121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.637152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.637399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.637424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.637658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.637683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.637898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.637923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.638132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.638158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.638392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.638417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.638617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.638642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.638848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.562 [2024-07-20 19:04:02.638874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.562 qpair failed and we were unable to recover it. 00:33:52.562 [2024-07-20 19:04:02.639086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.639112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 
00:33:52.563 [2024-07-20 19:04:02.639348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.639374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.639645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.639670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.639910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.639936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.640139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.640164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.640385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.640410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.640627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.640653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.640889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.640915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.641133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.641158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.641449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.641475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.641710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.641736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 
00:33:52.563 [2024-07-20 19:04:02.641959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.641985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.642198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.642224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.642462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.642487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.642700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.642725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.642960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.642987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.643197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.643223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.643437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.643464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.643669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.643694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.643918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.643946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.644162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.644188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 
00:33:52.563 [2024-07-20 19:04:02.644385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.644410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.644627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.644653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.644858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.644885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.645118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.645143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.645347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.645374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.645613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.645640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.645881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.645907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.646141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.646166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.646383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.646410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.646654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.646679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 
00:33:52.563 [2024-07-20 19:04:02.646924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.646950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.647166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.647191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.647395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.647424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.647636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.647661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.647894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.647921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.648167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.648193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.648404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.648431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 [2024-07-20 19:04:02.648644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.648674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 00:33:52.563 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:52.563 [2024-07-20 19:04:02.648892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.563 [2024-07-20 19:04:02.648941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.563 qpair failed and we were unable to recover it. 
00:33:52.563 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:33:52.563 [2024-07-20 19:04:02.649182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.649209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.649418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.649446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.649693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.649719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:52.564 [2024-07-20 19:04:02.649934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.649961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:52.564 [2024-07-20 19:04:02.650208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.650234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.650445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.650470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.650684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.650710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.650917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.650944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 
00:33:52.564 [2024-07-20 19:04:02.651154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.651180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.651427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.651453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.651697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.651722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.651981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.652008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.652248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.652275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.652514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.652540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.652771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.652804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.653023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.653050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.653292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.653317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.653529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.653555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 
00:33:52.564 [2024-07-20 19:04:02.653774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.653813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.654031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.654062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.654262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.654287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.654505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.654530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.654766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.654791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.655027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.655054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.655261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.655287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.655531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.655557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.655801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.655838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.656052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.656077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 
00:33:52.564 [2024-07-20 19:04:02.656344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.656369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.656579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.656604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.656859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.656888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.657102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.657128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.657409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.657435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.657669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.657696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.657965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.657991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.658230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.658256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.658493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.658519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.658753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.658778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 
00:33:52.564 [2024-07-20 19:04:02.658999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.659025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.659232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.564 [2024-07-20 19:04:02.659258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.564 qpair failed and we were unable to recover it. 00:33:52.564 [2024-07-20 19:04:02.659471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.659499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.659746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.659771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.660014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.660041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.660278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.660304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.660541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.660567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.660773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.660808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.661028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.661055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.661295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.661321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 
00:33:52.565 [2024-07-20 19:04:02.661524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.661550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.661778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.661813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.662039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.662074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.662311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.662337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.662576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.662601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.662851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.662877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.663118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.663144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.663395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.663421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.663666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.663691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.663933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.663959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 
00:33:52.565 [2024-07-20 19:04:02.664198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.664224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.664463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.664494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.664738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.664763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.665006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.665032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.665247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.665273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.665519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.665544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.665751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.665776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.665994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.666020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.666240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.666266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.666503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.666529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 
00:33:52.565 [2024-07-20 19:04:02.666805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.666832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.667049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.667076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.667318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.667344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.667583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.667609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.667853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.667879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.668091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.668117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.668350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.668375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.668608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.668633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.565 qpair failed and we were unable to recover it. 00:33:52.565 [2024-07-20 19:04:02.668844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.565 [2024-07-20 19:04:02.668871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.669109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.669134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 
00:33:52.566 [2024-07-20 19:04:02.669369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.669394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.669631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.669656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.669899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.669926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.670162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.670188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.670434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.670460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.670666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.670691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.670943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.670970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.671171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.671197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.671428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.671459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.671704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.671730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 
00:33:52.566 [2024-07-20 19:04:02.671938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.671964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.672182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.672207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.672437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.672463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.672678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.672704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.672939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.672965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.673209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.673235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.673514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.673539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.673751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.673776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.674050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.674076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.674282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.674307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 
00:33:52.566 [2024-07-20 19:04:02.674520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.566 [2024-07-20 19:04:02.674545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420
00:33:52.566 qpair failed and we were unable to recover it.
00:33:52.566 [2024-07-20 19:04:02.674775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.566 [2024-07-20 19:04:02.674808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420
00:33:52.566 qpair failed and we were unable to recover it.
00:33:52.566 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:52.566 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:33:52.566 [2024-07-20 19:04:02.675085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.566 [2024-07-20 19:04:02.675112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420
00:33:52.566 qpair failed and we were unable to recover it.
00:33:52.566 [2024-07-20 19:04:02.675329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.566 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:52.566 [2024-07-20 19:04:02.675355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420
00:33:52.566 qpair failed and we were unable to recover it.
00:33:52.566 [2024-07-20 19:04:02.675631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.566 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:52.566 [2024-07-20 19:04:02.675658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420
00:33:52.566 qpair failed and we were unable to recover it.
00:33:52.566 [2024-07-20 19:04:02.675896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.566 [2024-07-20 19:04:02.675923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420
00:33:52.566 qpair failed and we were unable to recover it.
00:33:52.566 [2024-07-20 19:04:02.676126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.566 [2024-07-20 19:04:02.676151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420
00:33:52.566 qpair failed and we were unable to recover it.
00:33:52.566 [2024-07-20 19:04:02.676382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.566 [2024-07-20 19:04:02.676408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420
00:33:52.566 qpair failed and we were unable to recover it.
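In between the reconnect failures, the test harness starts rebuilding the target side: it re-arms its cleanup trap and runs rpc_cmd bdev_malloc_create 64 512 -b Malloc0. rpc_cmd is the autotest helper that forwards to SPDK's scripts/rpc.py, so a standalone equivalent looks roughly like the sketch below; the default RPC socket path is an assumption here, and only the sizes and the Malloc0 name come from the log.

# Sketch, not part of the log: create the 64 MB, 512-byte-block RAM bdev by hand,
# assuming an SPDK target listening on the default RPC socket /var/tmp/spdk.sock.
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Malloc0   # confirm the bdev exists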
00:33:52.566 [2024-07-20 19:04:02.676619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.676645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.676853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.676879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.677094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.677120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.677356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.677381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.677644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.677669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.677908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.677935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.678180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.678206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.678416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.678443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.678678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.678704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 00:33:52.566 [2024-07-20 19:04:02.678922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.566 [2024-07-20 19:04:02.678955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.566 qpair failed and we were unable to recover it. 
00:33:52.566 [2024-07-20 19:04:02.679175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.679200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.679593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.679619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.679838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.679873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.680083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.680108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.680342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.680369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.680587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.680613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.680851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.680877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.681122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.681147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.681389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.681416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.681647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.681677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 
00:33:52.567 [2024-07-20 19:04:02.681947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.681973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.682201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.682226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.682494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.682519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.682734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.682759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.682985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.683011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.683220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.683247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.683462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.683488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.683695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.683721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.683962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.683988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.684194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.684220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 
00:33:52.567 [2024-07-20 19:04:02.684454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.684480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.684679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.684704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.684915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.684940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.685339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.685365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.685581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.685606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.685849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.685875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.686092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.686117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.686335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.686360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.686866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.686892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.687106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.687131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 
00:33:52.567 [2024-07-20 19:04:02.687358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.687383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.687603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.687628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.687871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.687897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.688124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.688150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.688387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.688412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.688624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.688649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.688859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.688885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.689111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.689136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.689343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.689368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.689610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.689635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 
00:33:52.567 [2024-07-20 19:04:02.690121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.690177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.567 qpair failed and we were unable to recover it. 00:33:52.567 [2024-07-20 19:04:02.690442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.567 [2024-07-20 19:04:02.690470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.690737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.690763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.691014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.691040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.691305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.691331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.691535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.691560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.691773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.691805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.692029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.692055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.692263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.692288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.692499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.692524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 
00:33:52.568 [2024-07-20 19:04:02.692771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.692809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.693057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.693082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.693295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.693320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.693566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.693591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.693807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.693833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.694072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.694098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.694342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.694367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.694586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.694611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.694875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.694901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.695152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.695177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 
00:33:52.568 [2024-07-20 19:04:02.695386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.695412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.695655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.695680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.695886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.695912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.696124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.696149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.696389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.696414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.696628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.696654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.696903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.696929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.697137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.697163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.697367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.697392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.697636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.697662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 
00:33:52.568 [2024-07-20 19:04:02.698234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.698289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.698547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.698574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.698818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.698850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.699090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.699115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.699371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.699397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.699615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.699640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.699917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.699943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.700145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.700176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.700446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.700471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.700748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.700773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 
00:33:52.568 [2024-07-20 19:04:02.701046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.701072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 [2024-07-20 19:04:02.701336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.568 [2024-07-20 19:04:02.701361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.568 qpair failed and we were unable to recover it. 00:33:52.568 Malloc0 00:33:52.568 [2024-07-20 19:04:02.701605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.701631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.701846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.701872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:52.569 [2024-07-20 19:04:02.702143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.702169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.569 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:52.569 [2024-07-20 19:04:02.702405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.702431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.702641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.702666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.702887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.702923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 
00:33:52.569 [2024-07-20 19:04:02.703166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.703191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.703431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.703460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.703724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.703749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.703975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.704003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.704223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.704248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.704485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.704512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.704754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.704779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.705024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.705050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.705190] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:52.569 [2024-07-20 19:04:02.705286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.705312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.705532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.705557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 
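The stray Malloc0 a few records up is the bdev name echoed back by that bdev_malloc_create call. The harness then creates the NVMe-oF TCP transport with rpc_cmd nvmf_create_transport -t tcp -o, and the target acknowledges it with the *** TCP Transport Init *** notice above. Leaving the harness's extra -o option aside, a minimal standalone equivalent would be roughly the following, again assuming the default RPC socket:

# Sketch, not part of the log: set up the TCP transport on a running SPDK target.
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t TCP
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_transports   # should now list the TCP transport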
00:33:52.569 [2024-07-20 19:04:02.705769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.705803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.706066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.706092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.706330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.706356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.706583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.706609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.706848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.706874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.707111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.707136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.707377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.707402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.707642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.707667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.707945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.707971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.708193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.708218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 
00:33:52.569 [2024-07-20 19:04:02.708462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.708487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.708701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.708726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.708939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.708966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.709179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.709205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.709441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.709466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.709682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.709707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.709924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.709950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.710184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.569 [2024-07-20 19:04:02.710210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.569 qpair failed and we were unable to recover it. 00:33:52.569 [2024-07-20 19:04:02.710460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.710490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.710740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.710765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 
00:33:52.570 [2024-07-20 19:04:02.710997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.711023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.711266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.711291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.711493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.711519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.711724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.711749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.711960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.711987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.712202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.712228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.712437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.712462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.712704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.712729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.712939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.712966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.713180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.713205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 
00:33:52.570 [2024-07-20 19:04:02.713407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.713432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:52.570 [2024-07-20 19:04:02.713635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.713660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.570 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:52.570 [2024-07-20 19:04:02.713915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.713940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.714162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.714187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.714391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.714415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.714619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.714644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.714858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.714885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.715107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.715134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 
00:33:52.570 [2024-07-20 19:04:02.715343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.715369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.715610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.715636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.715913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.715939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.716183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.716208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.716440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.716466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.716697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.716724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.716946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.716973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.717187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.717212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.717414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.717440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.717712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.717737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 
00:33:52.570 [2024-07-20 19:04:02.717989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.718016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.718269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.718294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.718551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.718576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.718844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.718871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.719092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.719117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.719355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.719380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.719612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.719637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.719887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.719913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.720182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.720207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.570 qpair failed and we were unable to recover it. 00:33:52.570 [2024-07-20 19:04:02.720448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.570 [2024-07-20 19:04:02.720478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 
00:33:52.571 [2024-07-20 19:04:02.720684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.720709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.720947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.720973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.721193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.721218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.721442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.721468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.571 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:52.571 [2024-07-20 19:04:02.721681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.721707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.571 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:52.571 [2024-07-20 19:04:02.721918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.721945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.722164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.722189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.722390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.722416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 
00:33:52.571 [2024-07-20 19:04:02.722665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.722690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.722911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.722937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.723181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.723207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.723475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.723505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.723752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.723777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.723989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.724014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.724248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.724274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.724479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.724505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.724717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.724742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.724958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.724984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 
00:33:52.571 [2024-07-20 19:04:02.725196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.725221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.725431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.725458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.725675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.725701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.725936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.725962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.726179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.726205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.726418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.726443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.726672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.726697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.726924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.726950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.727160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.727187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.727421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.727447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 
00:33:52.571 [2024-07-20 19:04:02.727655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.727679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.727938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.727964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.728169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.728194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.728406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.728431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.728685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.728710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.728921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.728947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.729190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.729215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 [2024-07-20 19:04:02.729431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.729458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.571 qpair failed and we were unable to recover it. 00:33:52.571 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:52.571 [2024-07-20 19:04:02.729673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.571 [2024-07-20 19:04:02.729699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.571 qpair failed and we were unable to recover it. 
00:33:52.571 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.571 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:52.571 [2024-07-20 19:04:02.729926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.572 [2024-07-20 19:04:02.729952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.572 qpair failed and we were unable to recover it. 00:33:52.572 [2024-07-20 19:04:02.730195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.572 [2024-07-20 19:04:02.730220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.572 qpair failed and we were unable to recover it. 00:33:52.572 [2024-07-20 19:04:02.730433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.572 [2024-07-20 19:04:02.730459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.572 qpair failed and we were unable to recover it. 00:33:52.572 [2024-07-20 19:04:02.730712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.572 [2024-07-20 19:04:02.730738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.572 qpair failed and we were unable to recover it. 00:33:52.572 [2024-07-20 19:04:02.731005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.572 [2024-07-20 19:04:02.731031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.572 qpair failed and we were unable to recover it. 00:33:52.572 [2024-07-20 19:04:02.731280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.572 [2024-07-20 19:04:02.731305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.572 qpair failed and we were unable to recover it. 00:33:52.572 [2024-07-20 19:04:02.731563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.572 [2024-07-20 19:04:02.731588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.572 qpair failed and we were unable to recover it. 00:33:52.572 [2024-07-20 19:04:02.731823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.572 [2024-07-20 19:04:02.731848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.572 qpair failed and we were unable to recover it. 00:33:52.572 [2024-07-20 19:04:02.732087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.572 [2024-07-20 19:04:02.732112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.572 qpair failed and we were unable to recover it. 
00:33:52.572 [2024-07-20 19:04:02.732345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.572 [2024-07-20 19:04:02.732370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.572 qpair failed and we were unable to recover it. 00:33:52.572 [2024-07-20 19:04:02.732592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.572 [2024-07-20 19:04:02.732617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.572 qpair failed and we were unable to recover it. 00:33:52.572 [2024-07-20 19:04:02.732862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.572 [2024-07-20 19:04:02.732888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.572 qpair failed and we were unable to recover it. 00:33:52.572 [2024-07-20 19:04:02.733106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.572 [2024-07-20 19:04:02.733131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.572 qpair failed and we were unable to recover it. 00:33:52.572 [2024-07-20 19:04:02.733337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.572 [2024-07-20 19:04:02.733366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58840 with addr=10.0.0.2, port=4420 00:33:52.572 qpair failed and we were unable to recover it. 00:33:52.572 [2024-07-20 19:04:02.733566] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:52.572 [2024-07-20 19:04:02.736038] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.572 [2024-07-20 19:04:02.736286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.572 [2024-07-20 19:04:02.736313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.572 [2024-07-20 19:04:02.736329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.572 [2024-07-20 19:04:02.736343] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.572 [2024-07-20 19:04:02.736379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.572 qpair failed and we were unable to recover it. 
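Two distinct failure modes show up in the trace above: until the target listener exists, every nvme_tcp_qpair_connect_sock attempt fails at the socket level with errno = 111 (ECONNREFUSED on Linux), and once "NVMe/TCP Target Listening on 10.0.0.2 port 4420" is logged the socket connects but the Fabrics CONNECT command completes with sct 1, sc 130 because the target rejects the unknown controller ID 0x1. As a quick sanity check on the numeric codes, assuming python3 is available on the build host, they can be decoded with:

python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111)); print("sct=0x1 (command specific), sc=0x%02x" % 130)'

which prints ECONNREFUSED - Connection refused and sc=0x82, i.e. a command-specific Fabrics CONNECT rejection consistent with the target-side "Unknown controller ID 0x1" error.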
00:33:52.572 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.572 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:52.572 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.572 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:52.572 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.572 19:04:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1543338 00:33:52.572 [2024-07-20 19:04:02.745917] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.572 [2024-07-20 19:04:02.746141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.572 [2024-07-20 19:04:02.746167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.572 [2024-07-20 19:04:02.746182] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.572 [2024-07-20 19:04:02.746196] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.572 [2024-07-20 19:04:02.746224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.572 qpair failed and we were unable to recover it. 00:33:52.572 [2024-07-20 19:04:02.755911] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.572 [2024-07-20 19:04:02.756129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.572 [2024-07-20 19:04:02.756156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.572 [2024-07-20 19:04:02.756170] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.572 [2024-07-20 19:04:02.756185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.572 [2024-07-20 19:04:02.756214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.572 qpair failed and we were unable to recover it. 
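The rpc_cmd calls interleaved with the errors above are issued by host/target_disconnect.sh through SPDK's JSON-RPC interface. A rough stand-alone equivalent of that target-side setup, assuming a running nvmf_tgt that already has a Malloc0 bdev and a TCP transport created and scripts/rpc.py talking to the default RPC socket, would be the following sequence (a sketch, not the literal test code):

scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, set serial number
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as a namespace
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420            # discovery subsystem listener

Only after the add_listener call for cnode1 does the target log the "NVMe/TCP Target Listening" notice, which is why the initiator's connect() retries keep failing with errno = 111 up to that point.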
00:33:52.572 [2024-07-20 19:04:02.765865] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.572 [2024-07-20 19:04:02.766078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.572 [2024-07-20 19:04:02.766109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.572 [2024-07-20 19:04:02.766124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.572 [2024-07-20 19:04:02.766145] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.572 [2024-07-20 19:04:02.766175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.572 qpair failed and we were unable to recover it. 00:33:52.572 [2024-07-20 19:04:02.775901] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.572 [2024-07-20 19:04:02.776110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.572 [2024-07-20 19:04:02.776136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.572 [2024-07-20 19:04:02.776150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.572 [2024-07-20 19:04:02.776163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.572 [2024-07-20 19:04:02.776192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.572 qpair failed and we were unable to recover it. 00:33:52.572 [2024-07-20 19:04:02.785946] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.572 [2024-07-20 19:04:02.786148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.572 [2024-07-20 19:04:02.786175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.572 [2024-07-20 19:04:02.786189] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.572 [2024-07-20 19:04:02.786202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.572 [2024-07-20 19:04:02.786230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.572 qpair failed and we were unable to recover it. 
00:33:52.572 [2024-07-20 19:04:02.795944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.572 [2024-07-20 19:04:02.796178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.572 [2024-07-20 19:04:02.796204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.572 [2024-07-20 19:04:02.796219] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.572 [2024-07-20 19:04:02.796232] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.572 [2024-07-20 19:04:02.796261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.572 qpair failed and we were unable to recover it. 00:33:52.572 [2024-07-20 19:04:02.805957] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.572 [2024-07-20 19:04:02.806171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.572 [2024-07-20 19:04:02.806196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.572 [2024-07-20 19:04:02.806210] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.572 [2024-07-20 19:04:02.806224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.572 [2024-07-20 19:04:02.806258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.572 qpair failed and we were unable to recover it. 00:33:52.572 [2024-07-20 19:04:02.816075] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.572 [2024-07-20 19:04:02.816288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.572 [2024-07-20 19:04:02.816314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.572 [2024-07-20 19:04:02.816329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.573 [2024-07-20 19:04:02.816342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.573 [2024-07-20 19:04:02.816370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.573 qpair failed and we were unable to recover it. 
00:33:52.573 [2024-07-20 19:04:02.825990] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.573 [2024-07-20 19:04:02.826207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.573 [2024-07-20 19:04:02.826232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.573 [2024-07-20 19:04:02.826247] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.573 [2024-07-20 19:04:02.826260] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.573 [2024-07-20 19:04:02.826289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.573 qpair failed and we were unable to recover it. 00:33:52.573 [2024-07-20 19:04:02.836032] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.573 [2024-07-20 19:04:02.836242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.573 [2024-07-20 19:04:02.836268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.573 [2024-07-20 19:04:02.836288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.573 [2024-07-20 19:04:02.836302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.573 [2024-07-20 19:04:02.836331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.573 qpair failed and we were unable to recover it. 00:33:52.573 [2024-07-20 19:04:02.846162] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.573 [2024-07-20 19:04:02.846375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.573 [2024-07-20 19:04:02.846401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.573 [2024-07-20 19:04:02.846416] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.573 [2024-07-20 19:04:02.846428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.573 [2024-07-20 19:04:02.846457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.573 qpair failed and we were unable to recover it. 
00:33:52.573 [2024-07-20 19:04:02.856137] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.573 [2024-07-20 19:04:02.856355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.573 [2024-07-20 19:04:02.856386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.573 [2024-07-20 19:04:02.856401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.573 [2024-07-20 19:04:02.856414] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.573 [2024-07-20 19:04:02.856443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.573 qpair failed and we were unable to recover it. 00:33:52.573 [2024-07-20 19:04:02.866269] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.573 [2024-07-20 19:04:02.866517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.573 [2024-07-20 19:04:02.866553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.573 [2024-07-20 19:04:02.866578] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.573 [2024-07-20 19:04:02.866601] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.573 [2024-07-20 19:04:02.866644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.573 qpair failed and we were unable to recover it. 00:33:52.831 [2024-07-20 19:04:02.876160] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.831 [2024-07-20 19:04:02.876369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.831 [2024-07-20 19:04:02.876397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.831 [2024-07-20 19:04:02.876412] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.831 [2024-07-20 19:04:02.876426] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.831 [2024-07-20 19:04:02.876455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.831 qpair failed and we were unable to recover it. 
00:33:52.831 [2024-07-20 19:04:02.886179] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.831 [2024-07-20 19:04:02.886398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.831 [2024-07-20 19:04:02.886424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.831 [2024-07-20 19:04:02.886439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.831 [2024-07-20 19:04:02.886452] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.831 [2024-07-20 19:04:02.886481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.831 qpair failed and we were unable to recover it. 00:33:52.831 [2024-07-20 19:04:02.896219] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.831 [2024-07-20 19:04:02.896432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.831 [2024-07-20 19:04:02.896458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.831 [2024-07-20 19:04:02.896472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.831 [2024-07-20 19:04:02.896491] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.831 [2024-07-20 19:04:02.896520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.831 qpair failed and we were unable to recover it. 00:33:52.831 [2024-07-20 19:04:02.906250] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.831 [2024-07-20 19:04:02.906513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.831 [2024-07-20 19:04:02.906542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.831 [2024-07-20 19:04:02.906562] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.831 [2024-07-20 19:04:02.906576] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.831 [2024-07-20 19:04:02.906606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.831 qpair failed and we were unable to recover it. 
00:33:52.832 [2024-07-20 19:04:02.916268] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.832 [2024-07-20 19:04:02.916485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.832 [2024-07-20 19:04:02.916511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.832 [2024-07-20 19:04:02.916526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.832 [2024-07-20 19:04:02.916539] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.832 [2024-07-20 19:04:02.916568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.832 qpair failed and we were unable to recover it. 00:33:52.832 [2024-07-20 19:04:02.926345] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.832 [2024-07-20 19:04:02.926583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.832 [2024-07-20 19:04:02.926610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.832 [2024-07-20 19:04:02.926628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.832 [2024-07-20 19:04:02.926643] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.832 [2024-07-20 19:04:02.926673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.832 qpair failed and we were unable to recover it. 00:33:52.832 [2024-07-20 19:04:02.936354] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.832 [2024-07-20 19:04:02.936570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.832 [2024-07-20 19:04:02.936596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.832 [2024-07-20 19:04:02.936611] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.832 [2024-07-20 19:04:02.936623] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.832 [2024-07-20 19:04:02.936654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.832 qpair failed and we were unable to recover it. 
00:33:52.832 [2024-07-20 19:04:02.946342] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.832 [2024-07-20 19:04:02.946557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.832 [2024-07-20 19:04:02.946587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.832 [2024-07-20 19:04:02.946602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.832 [2024-07-20 19:04:02.946615] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.832 [2024-07-20 19:04:02.946643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.832 qpair failed and we were unable to recover it. 00:33:52.832 [2024-07-20 19:04:02.956434] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.832 [2024-07-20 19:04:02.956641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.832 [2024-07-20 19:04:02.956667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.832 [2024-07-20 19:04:02.956682] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.832 [2024-07-20 19:04:02.956695] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.832 [2024-07-20 19:04:02.956724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.832 qpair failed and we were unable to recover it. 00:33:52.832 [2024-07-20 19:04:02.966467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.832 [2024-07-20 19:04:02.966685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.832 [2024-07-20 19:04:02.966710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.832 [2024-07-20 19:04:02.966724] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.832 [2024-07-20 19:04:02.966737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.832 [2024-07-20 19:04:02.966766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.832 qpair failed and we were unable to recover it. 
00:33:52.832 [2024-07-20 19:04:02.976512] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.832 [2024-07-20 19:04:02.976758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.832 [2024-07-20 19:04:02.976784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.832 [2024-07-20 19:04:02.976809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.832 [2024-07-20 19:04:02.976823] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.832 [2024-07-20 19:04:02.976852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.832 qpair failed and we were unable to recover it. 00:33:52.832 [2024-07-20 19:04:02.986494] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.832 [2024-07-20 19:04:02.986705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.832 [2024-07-20 19:04:02.986732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.832 [2024-07-20 19:04:02.986747] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.832 [2024-07-20 19:04:02.986765] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.832 [2024-07-20 19:04:02.986804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.832 qpair failed and we were unable to recover it. 00:33:52.832 [2024-07-20 19:04:02.996497] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.832 [2024-07-20 19:04:02.996701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.832 [2024-07-20 19:04:02.996728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.832 [2024-07-20 19:04:02.996742] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.832 [2024-07-20 19:04:02.996755] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.832 [2024-07-20 19:04:02.996785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.832 qpair failed and we were unable to recover it. 
00:33:52.832 [2024-07-20 19:04:03.006520] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.832 [2024-07-20 19:04:03.006778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.832 [2024-07-20 19:04:03.006811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.832 [2024-07-20 19:04:03.006826] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.832 [2024-07-20 19:04:03.006839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.832 [2024-07-20 19:04:03.006868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.832 qpair failed and we were unable to recover it. 00:33:52.832 [2024-07-20 19:04:03.016563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.832 [2024-07-20 19:04:03.016777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.832 [2024-07-20 19:04:03.016812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.832 [2024-07-20 19:04:03.016828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.832 [2024-07-20 19:04:03.016842] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.832 [2024-07-20 19:04:03.016871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.832 qpair failed and we were unable to recover it. 00:33:52.832 [2024-07-20 19:04:03.026584] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.832 [2024-07-20 19:04:03.026800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.832 [2024-07-20 19:04:03.026826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.832 [2024-07-20 19:04:03.026841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.832 [2024-07-20 19:04:03.026854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.832 [2024-07-20 19:04:03.026883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.832 qpair failed and we were unable to recover it. 
00:33:52.832 [2024-07-20 19:04:03.036625] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.832 [2024-07-20 19:04:03.036851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.832 [2024-07-20 19:04:03.036877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.832 [2024-07-20 19:04:03.036892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.832 [2024-07-20 19:04:03.036905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.832 [2024-07-20 19:04:03.036934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.832 qpair failed and we were unable to recover it. 00:33:52.832 [2024-07-20 19:04:03.046629] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.832 [2024-07-20 19:04:03.046889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.832 [2024-07-20 19:04:03.046916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.832 [2024-07-20 19:04:03.046931] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.832 [2024-07-20 19:04:03.046944] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.832 [2024-07-20 19:04:03.046975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.832 qpair failed and we were unable to recover it. 00:33:52.832 [2024-07-20 19:04:03.056637] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.832 [2024-07-20 19:04:03.056858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.832 [2024-07-20 19:04:03.056884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.832 [2024-07-20 19:04:03.056899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.832 [2024-07-20 19:04:03.056912] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.832 [2024-07-20 19:04:03.056941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.832 qpair failed and we were unable to recover it. 
00:33:52.832 [2024-07-20 19:04:03.066673] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.832 [2024-07-20 19:04:03.066901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.832 [2024-07-20 19:04:03.066927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.832 [2024-07-20 19:04:03.066941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.832 [2024-07-20 19:04:03.066954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.832 [2024-07-20 19:04:03.066983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.832 qpair failed and we were unable to recover it. 00:33:52.832 [2024-07-20 19:04:03.076713] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.832 [2024-07-20 19:04:03.076931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.832 [2024-07-20 19:04:03.076958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.832 [2024-07-20 19:04:03.076972] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.832 [2024-07-20 19:04:03.076991] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.832 [2024-07-20 19:04:03.077020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.832 qpair failed and we were unable to recover it. 00:33:52.832 [2024-07-20 19:04:03.086750] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.832 [2024-07-20 19:04:03.086970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.832 [2024-07-20 19:04:03.086995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.832 [2024-07-20 19:04:03.087009] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.832 [2024-07-20 19:04:03.087022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.832 [2024-07-20 19:04:03.087051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.832 qpair failed and we were unable to recover it. 
00:33:52.832 [2024-07-20 19:04:03.096830] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.832 [2024-07-20 19:04:03.097040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.832 [2024-07-20 19:04:03.097066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.832 [2024-07-20 19:04:03.097080] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.832 [2024-07-20 19:04:03.097094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.832 [2024-07-20 19:04:03.097123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.832 qpair failed and we were unable to recover it. 00:33:52.832 [2024-07-20 19:04:03.106813] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.832 [2024-07-20 19:04:03.107025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.832 [2024-07-20 19:04:03.107051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.832 [2024-07-20 19:04:03.107065] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.832 [2024-07-20 19:04:03.107079] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.832 [2024-07-20 19:04:03.107110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.832 qpair failed and we were unable to recover it. 00:33:52.832 [2024-07-20 19:04:03.116850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.832 [2024-07-20 19:04:03.117059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.832 [2024-07-20 19:04:03.117086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.832 [2024-07-20 19:04:03.117105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.832 [2024-07-20 19:04:03.117119] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.832 [2024-07-20 19:04:03.117148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.832 qpair failed and we were unable to recover it. 
00:33:52.832 [2024-07-20 19:04:03.126880] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.832 [2024-07-20 19:04:03.127094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.832 [2024-07-20 19:04:03.127120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.832 [2024-07-20 19:04:03.127135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.832 [2024-07-20 19:04:03.127148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.832 [2024-07-20 19:04:03.127176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.832 qpair failed and we were unable to recover it. 00:33:52.832 [2024-07-20 19:04:03.136920] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.832 [2024-07-20 19:04:03.137138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.832 [2024-07-20 19:04:03.137164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.832 [2024-07-20 19:04:03.137179] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.832 [2024-07-20 19:04:03.137192] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.832 [2024-07-20 19:04:03.137222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.832 qpair failed and we were unable to recover it. 00:33:52.833 [2024-07-20 19:04:03.146929] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.833 [2024-07-20 19:04:03.147136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.833 [2024-07-20 19:04:03.147162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.833 [2024-07-20 19:04:03.147176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.833 [2024-07-20 19:04:03.147190] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:52.833 [2024-07-20 19:04:03.147220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.833 qpair failed and we were unable to recover it. 
00:33:53.090 [2024-07-20 19:04:03.156955] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.090 [2024-07-20 19:04:03.157172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.090 [2024-07-20 19:04:03.157200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.090 [2024-07-20 19:04:03.157215] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.090 [2024-07-20 19:04:03.157229] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.090 [2024-07-20 19:04:03.157259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.090 qpair failed and we were unable to recover it. 00:33:53.090 [2024-07-20 19:04:03.167026] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.090 [2024-07-20 19:04:03.167274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.090 [2024-07-20 19:04:03.167301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.090 [2024-07-20 19:04:03.167322] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.090 [2024-07-20 19:04:03.167336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.090 [2024-07-20 19:04:03.167366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.090 qpair failed and we were unable to recover it. 00:33:53.090 [2024-07-20 19:04:03.177006] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.090 [2024-07-20 19:04:03.177222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.090 [2024-07-20 19:04:03.177249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.090 [2024-07-20 19:04:03.177264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.090 [2024-07-20 19:04:03.177277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.090 [2024-07-20 19:04:03.177306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.090 qpair failed and we were unable to recover it. 
00:33:53.090 [2024-07-20 19:04:03.187053] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.090 [2024-07-20 19:04:03.187263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.090 [2024-07-20 19:04:03.187288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.090 [2024-07-20 19:04:03.187302] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.090 [2024-07-20 19:04:03.187316] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.090 [2024-07-20 19:04:03.187344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.090 qpair failed and we were unable to recover it. 00:33:53.090 [2024-07-20 19:04:03.197065] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.090 [2024-07-20 19:04:03.197268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.090 [2024-07-20 19:04:03.197294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.090 [2024-07-20 19:04:03.197308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.090 [2024-07-20 19:04:03.197321] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.090 [2024-07-20 19:04:03.197349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.090 qpair failed and we were unable to recover it. 00:33:53.090 [2024-07-20 19:04:03.207119] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.090 [2024-07-20 19:04:03.207344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.090 [2024-07-20 19:04:03.207369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.090 [2024-07-20 19:04:03.207383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.090 [2024-07-20 19:04:03.207396] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.090 [2024-07-20 19:04:03.207425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.090 qpair failed and we were unable to recover it. 
00:33:53.090 [2024-07-20 19:04:03.217114] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.090 [2024-07-20 19:04:03.217323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.090 [2024-07-20 19:04:03.217349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.090 [2024-07-20 19:04:03.217363] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.090 [2024-07-20 19:04:03.217377] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.090 [2024-07-20 19:04:03.217406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.090 qpair failed and we were unable to recover it. 00:33:53.090 [2024-07-20 19:04:03.227250] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.090 [2024-07-20 19:04:03.227450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.090 [2024-07-20 19:04:03.227475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.090 [2024-07-20 19:04:03.227489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.090 [2024-07-20 19:04:03.227503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.090 [2024-07-20 19:04:03.227531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.090 qpair failed and we were unable to recover it. 00:33:53.090 [2024-07-20 19:04:03.237198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.090 [2024-07-20 19:04:03.237401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.090 [2024-07-20 19:04:03.237427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.090 [2024-07-20 19:04:03.237441] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.090 [2024-07-20 19:04:03.237455] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.090 [2024-07-20 19:04:03.237483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.090 qpair failed and we were unable to recover it. 
00:33:53.090 [2024-07-20 19:04:03.247210] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.090 [2024-07-20 19:04:03.247423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.090 [2024-07-20 19:04:03.247449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.090 [2024-07-20 19:04:03.247463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.090 [2024-07-20 19:04:03.247477] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.090 [2024-07-20 19:04:03.247505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.090 qpair failed and we were unable to recover it. 00:33:53.090 [2024-07-20 19:04:03.257236] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.090 [2024-07-20 19:04:03.257443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.090 [2024-07-20 19:04:03.257468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.090 [2024-07-20 19:04:03.257489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.091 [2024-07-20 19:04:03.257503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.091 [2024-07-20 19:04:03.257534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.091 qpair failed and we were unable to recover it. 00:33:53.091 [2024-07-20 19:04:03.267272] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.091 [2024-07-20 19:04:03.267483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.091 [2024-07-20 19:04:03.267508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.091 [2024-07-20 19:04:03.267523] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.091 [2024-07-20 19:04:03.267535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.091 [2024-07-20 19:04:03.267564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.091 qpair failed and we were unable to recover it. 
00:33:53.091 [2024-07-20 19:04:03.277395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.091 [2024-07-20 19:04:03.277640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.091 [2024-07-20 19:04:03.277666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.091 [2024-07-20 19:04:03.277680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.091 [2024-07-20 19:04:03.277693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.091 [2024-07-20 19:04:03.277721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.091 qpair failed and we were unable to recover it. 00:33:53.091 [2024-07-20 19:04:03.287483] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.091 [2024-07-20 19:04:03.287738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.091 [2024-07-20 19:04:03.287763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.091 [2024-07-20 19:04:03.287777] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.091 [2024-07-20 19:04:03.287790] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.091 [2024-07-20 19:04:03.287829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.091 qpair failed and we were unable to recover it. 00:33:53.091 [2024-07-20 19:04:03.297381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.091 [2024-07-20 19:04:03.297596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.091 [2024-07-20 19:04:03.297622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.091 [2024-07-20 19:04:03.297636] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.091 [2024-07-20 19:04:03.297649] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.091 [2024-07-20 19:04:03.297678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.091 qpair failed and we were unable to recover it. 
00:33:53.091 [2024-07-20 19:04:03.307476] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.091 [2024-07-20 19:04:03.307685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.091 [2024-07-20 19:04:03.307710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.091 [2024-07-20 19:04:03.307725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.091 [2024-07-20 19:04:03.307737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.091 [2024-07-20 19:04:03.307764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.091 qpair failed and we were unable to recover it. 00:33:53.091 [2024-07-20 19:04:03.317424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.091 [2024-07-20 19:04:03.317624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.091 [2024-07-20 19:04:03.317649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.091 [2024-07-20 19:04:03.317664] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.091 [2024-07-20 19:04:03.317677] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.091 [2024-07-20 19:04:03.317706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.091 qpair failed and we were unable to recover it. 00:33:53.091 [2024-07-20 19:04:03.327496] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.091 [2024-07-20 19:04:03.327706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.091 [2024-07-20 19:04:03.327732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.091 [2024-07-20 19:04:03.327747] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.091 [2024-07-20 19:04:03.327760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.091 [2024-07-20 19:04:03.327788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.091 qpair failed and we were unable to recover it. 
00:33:53.091 [2024-07-20 19:04:03.337504] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.091 [2024-07-20 19:04:03.337762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.091 [2024-07-20 19:04:03.337788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.091 [2024-07-20 19:04:03.337812] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.091 [2024-07-20 19:04:03.337826] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.091 [2024-07-20 19:04:03.337855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.091 qpair failed and we were unable to recover it. 00:33:53.091 [2024-07-20 19:04:03.347522] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.091 [2024-07-20 19:04:03.347726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.091 [2024-07-20 19:04:03.347751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.091 [2024-07-20 19:04:03.347772] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.091 [2024-07-20 19:04:03.347786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.091 [2024-07-20 19:04:03.347823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.091 qpair failed and we were unable to recover it. 00:33:53.091 [2024-07-20 19:04:03.357547] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.091 [2024-07-20 19:04:03.357761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.091 [2024-07-20 19:04:03.357785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.091 [2024-07-20 19:04:03.357808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.091 [2024-07-20 19:04:03.357822] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.091 [2024-07-20 19:04:03.357854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.091 qpair failed and we were unable to recover it. 
00:33:53.091 [2024-07-20 19:04:03.367588] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.091 [2024-07-20 19:04:03.367804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.091 [2024-07-20 19:04:03.367830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.091 [2024-07-20 19:04:03.367845] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.091 [2024-07-20 19:04:03.367858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.091 [2024-07-20 19:04:03.367886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.091 qpair failed and we were unable to recover it. 00:33:53.091 [2024-07-20 19:04:03.377616] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.091 [2024-07-20 19:04:03.377839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.091 [2024-07-20 19:04:03.377865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.091 [2024-07-20 19:04:03.377879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.091 [2024-07-20 19:04:03.377892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.091 [2024-07-20 19:04:03.377921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.091 qpair failed and we were unable to recover it. 00:33:53.091 [2024-07-20 19:04:03.387635] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.091 [2024-07-20 19:04:03.387850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.091 [2024-07-20 19:04:03.387876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.091 [2024-07-20 19:04:03.387891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.091 [2024-07-20 19:04:03.387904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.091 [2024-07-20 19:04:03.387932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.091 qpair failed and we were unable to recover it. 
00:33:53.091 [2024-07-20 19:04:03.397708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.091 [2024-07-20 19:04:03.397927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.091 [2024-07-20 19:04:03.397952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.091 [2024-07-20 19:04:03.397967] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.091 [2024-07-20 19:04:03.397980] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.091 [2024-07-20 19:04:03.398008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.091 qpair failed and we were unable to recover it. 00:33:53.091 [2024-07-20 19:04:03.407710] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.092 [2024-07-20 19:04:03.407933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.092 [2024-07-20 19:04:03.407958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.092 [2024-07-20 19:04:03.407973] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.092 [2024-07-20 19:04:03.407986] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.092 [2024-07-20 19:04:03.408014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.092 qpair failed and we were unable to recover it. 00:33:53.350 [2024-07-20 19:04:03.417724] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.350 [2024-07-20 19:04:03.417982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.350 [2024-07-20 19:04:03.418011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.350 [2024-07-20 19:04:03.418026] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.350 [2024-07-20 19:04:03.418040] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.350 [2024-07-20 19:04:03.418069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.350 qpair failed and we were unable to recover it. 
00:33:53.350 [2024-07-20 19:04:03.427743] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.350 [2024-07-20 19:04:03.427959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.350 [2024-07-20 19:04:03.427985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.350 [2024-07-20 19:04:03.428001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.350 [2024-07-20 19:04:03.428014] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.350 [2024-07-20 19:04:03.428043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.350 qpair failed and we were unable to recover it. 00:33:53.350 [2024-07-20 19:04:03.437769] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.350 [2024-07-20 19:04:03.437982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.350 [2024-07-20 19:04:03.438013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.350 [2024-07-20 19:04:03.438028] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.350 [2024-07-20 19:04:03.438042] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.350 [2024-07-20 19:04:03.438071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.350 qpair failed and we were unable to recover it. 00:33:53.350 [2024-07-20 19:04:03.447813] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.350 [2024-07-20 19:04:03.448064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.350 [2024-07-20 19:04:03.448090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.350 [2024-07-20 19:04:03.448105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.350 [2024-07-20 19:04:03.448118] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.350 [2024-07-20 19:04:03.448147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.350 qpair failed and we were unable to recover it. 
00:33:53.350 [2024-07-20 19:04:03.457891] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.350 [2024-07-20 19:04:03.458104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.350 [2024-07-20 19:04:03.458131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.350 [2024-07-20 19:04:03.458150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.350 [2024-07-20 19:04:03.458164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.350 [2024-07-20 19:04:03.458194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.350 qpair failed and we were unable to recover it. 00:33:53.350 [2024-07-20 19:04:03.467866] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.350 [2024-07-20 19:04:03.468069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.350 [2024-07-20 19:04:03.468095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.350 [2024-07-20 19:04:03.468109] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.350 [2024-07-20 19:04:03.468123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.350 [2024-07-20 19:04:03.468151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.350 qpair failed and we were unable to recover it. 00:33:53.350 [2024-07-20 19:04:03.477915] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.350 [2024-07-20 19:04:03.478129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.350 [2024-07-20 19:04:03.478155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.350 [2024-07-20 19:04:03.478169] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.350 [2024-07-20 19:04:03.478182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.350 [2024-07-20 19:04:03.478211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.350 qpair failed and we were unable to recover it. 
00:33:53.350 [2024-07-20 19:04:03.487945] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.350 [2024-07-20 19:04:03.488164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.350 [2024-07-20 19:04:03.488190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.350 [2024-07-20 19:04:03.488205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.350 [2024-07-20 19:04:03.488219] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.350 [2024-07-20 19:04:03.488247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.350 qpair failed and we were unable to recover it. 00:33:53.350 [2024-07-20 19:04:03.497973] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.350 [2024-07-20 19:04:03.498220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.350 [2024-07-20 19:04:03.498246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.350 [2024-07-20 19:04:03.498261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.350 [2024-07-20 19:04:03.498274] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.350 [2024-07-20 19:04:03.498304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.350 qpair failed and we were unable to recover it. 00:33:53.350 [2024-07-20 19:04:03.508087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.350 [2024-07-20 19:04:03.508299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.350 [2024-07-20 19:04:03.508326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.350 [2024-07-20 19:04:03.508340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.350 [2024-07-20 19:04:03.508354] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.350 [2024-07-20 19:04:03.508383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.350 qpair failed and we were unable to recover it. 
00:33:53.350 [2024-07-20 19:04:03.518005] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.350 [2024-07-20 19:04:03.518225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.350 [2024-07-20 19:04:03.518251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.350 [2024-07-20 19:04:03.518265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.350 [2024-07-20 19:04:03.518279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.350 [2024-07-20 19:04:03.518308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.350 qpair failed and we were unable to recover it. 00:33:53.350 [2024-07-20 19:04:03.528137] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.350 [2024-07-20 19:04:03.528362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.350 [2024-07-20 19:04:03.528391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.350 [2024-07-20 19:04:03.528406] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.350 [2024-07-20 19:04:03.528419] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.350 [2024-07-20 19:04:03.528448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.350 qpair failed and we were unable to recover it. 00:33:53.350 [2024-07-20 19:04:03.538097] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.350 [2024-07-20 19:04:03.538316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.350 [2024-07-20 19:04:03.538342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.350 [2024-07-20 19:04:03.538356] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.350 [2024-07-20 19:04:03.538370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.350 [2024-07-20 19:04:03.538398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.350 qpair failed and we were unable to recover it. 
00:33:53.350 [2024-07-20 19:04:03.548230] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.350 [2024-07-20 19:04:03.548436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.350 [2024-07-20 19:04:03.548462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.350 [2024-07-20 19:04:03.548477] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.350 [2024-07-20 19:04:03.548490] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.350 [2024-07-20 19:04:03.548518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.350 qpair failed and we were unable to recover it. 00:33:53.350 [2024-07-20 19:04:03.558192] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.350 [2024-07-20 19:04:03.558401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.350 [2024-07-20 19:04:03.558427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.350 [2024-07-20 19:04:03.558441] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.350 [2024-07-20 19:04:03.558455] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.350 [2024-07-20 19:04:03.558484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.350 qpair failed and we were unable to recover it. 00:33:53.350 [2024-07-20 19:04:03.568143] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.350 [2024-07-20 19:04:03.568352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.350 [2024-07-20 19:04:03.568378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.350 [2024-07-20 19:04:03.568393] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.350 [2024-07-20 19:04:03.568406] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.350 [2024-07-20 19:04:03.568440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.350 qpair failed and we were unable to recover it. 
00:33:53.350 [2024-07-20 19:04:03.578173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.350 [2024-07-20 19:04:03.578386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.351 [2024-07-20 19:04:03.578411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.351 [2024-07-20 19:04:03.578426] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.351 [2024-07-20 19:04:03.578439] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.351 [2024-07-20 19:04:03.578468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.351 qpair failed and we were unable to recover it. 00:33:53.351 [2024-07-20 19:04:03.588198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.351 [2024-07-20 19:04:03.588466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.351 [2024-07-20 19:04:03.588493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.351 [2024-07-20 19:04:03.588507] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.351 [2024-07-20 19:04:03.588524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.351 [2024-07-20 19:04:03.588555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.351 qpair failed and we were unable to recover it. 00:33:53.351 [2024-07-20 19:04:03.598222] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.351 [2024-07-20 19:04:03.598439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.351 [2024-07-20 19:04:03.598465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.351 [2024-07-20 19:04:03.598480] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.351 [2024-07-20 19:04:03.598493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.351 [2024-07-20 19:04:03.598522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.351 qpair failed and we were unable to recover it. 
00:33:53.351 [2024-07-20 19:04:03.608310] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.351 [2024-07-20 19:04:03.608532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.351 [2024-07-20 19:04:03.608559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.351 [2024-07-20 19:04:03.608577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.351 [2024-07-20 19:04:03.608591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.351 [2024-07-20 19:04:03.608621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.351 qpair failed and we were unable to recover it. 00:33:53.351 [2024-07-20 19:04:03.618286] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.351 [2024-07-20 19:04:03.618501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.351 [2024-07-20 19:04:03.618532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.351 [2024-07-20 19:04:03.618548] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.351 [2024-07-20 19:04:03.618561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.351 [2024-07-20 19:04:03.618590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.351 qpair failed and we were unable to recover it. 00:33:53.351 [2024-07-20 19:04:03.628354] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.351 [2024-07-20 19:04:03.628562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.351 [2024-07-20 19:04:03.628587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.351 [2024-07-20 19:04:03.628602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.351 [2024-07-20 19:04:03.628615] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.351 [2024-07-20 19:04:03.628643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.351 qpair failed and we were unable to recover it. 
00:33:53.351 [2024-07-20 19:04:03.638340] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.351 [2024-07-20 19:04:03.638544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.351 [2024-07-20 19:04:03.638569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.351 [2024-07-20 19:04:03.638584] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.351 [2024-07-20 19:04:03.638597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.351 [2024-07-20 19:04:03.638625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.351 qpair failed and we were unable to recover it. 00:33:53.351 [2024-07-20 19:04:03.648372] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.351 [2024-07-20 19:04:03.648584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.351 [2024-07-20 19:04:03.648610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.351 [2024-07-20 19:04:03.648625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.351 [2024-07-20 19:04:03.648638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.351 [2024-07-20 19:04:03.648666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.351 qpair failed and we were unable to recover it. 00:33:53.351 [2024-07-20 19:04:03.658406] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.351 [2024-07-20 19:04:03.658613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.351 [2024-07-20 19:04:03.658639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.351 [2024-07-20 19:04:03.658654] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.351 [2024-07-20 19:04:03.658668] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.351 [2024-07-20 19:04:03.658702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.351 qpair failed and we were unable to recover it. 
00:33:53.351 [2024-07-20 19:04:03.668412] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.351 [2024-07-20 19:04:03.668628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.351 [2024-07-20 19:04:03.668655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.351 [2024-07-20 19:04:03.668670] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.351 [2024-07-20 19:04:03.668683] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.351 [2024-07-20 19:04:03.668712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.351 qpair failed and we were unable to recover it. 00:33:53.610 [2024-07-20 19:04:03.678459] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.610 [2024-07-20 19:04:03.678662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.610 [2024-07-20 19:04:03.678690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.610 [2024-07-20 19:04:03.678706] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.610 [2024-07-20 19:04:03.678719] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.610 [2024-07-20 19:04:03.678748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.610 qpair failed and we were unable to recover it. 00:33:53.610 [2024-07-20 19:04:03.688523] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.610 [2024-07-20 19:04:03.688736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.610 [2024-07-20 19:04:03.688762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.610 [2024-07-20 19:04:03.688777] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.610 [2024-07-20 19:04:03.688790] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.610 [2024-07-20 19:04:03.688832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.610 qpair failed and we were unable to recover it. 
00:33:53.610 [2024-07-20 19:04:03.698500] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.610 [2024-07-20 19:04:03.698711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.610 [2024-07-20 19:04:03.698737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.610 [2024-07-20 19:04:03.698752] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.610 [2024-07-20 19:04:03.698765] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.610 [2024-07-20 19:04:03.698804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.610 qpair failed and we were unable to recover it. 00:33:53.610 [2024-07-20 19:04:03.708538] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.610 [2024-07-20 19:04:03.708742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.610 [2024-07-20 19:04:03.708773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.610 [2024-07-20 19:04:03.708788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.610 [2024-07-20 19:04:03.708812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.610 [2024-07-20 19:04:03.708843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.610 qpair failed and we were unable to recover it. 00:33:53.610 [2024-07-20 19:04:03.718578] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.610 [2024-07-20 19:04:03.718784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.610 [2024-07-20 19:04:03.718816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.610 [2024-07-20 19:04:03.718831] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.610 [2024-07-20 19:04:03.718844] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.610 [2024-07-20 19:04:03.718874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.610 qpair failed and we were unable to recover it. 
00:33:53.610 [2024-07-20 19:04:03.728610] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.610 [2024-07-20 19:04:03.728861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.610 [2024-07-20 19:04:03.728886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.610 [2024-07-20 19:04:03.728901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.610 [2024-07-20 19:04:03.728914] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.610 [2024-07-20 19:04:03.728942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.610 qpair failed and we were unable to recover it. 00:33:53.610 [2024-07-20 19:04:03.738633] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.610 [2024-07-20 19:04:03.738848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.610 [2024-07-20 19:04:03.738874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.610 [2024-07-20 19:04:03.738889] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.610 [2024-07-20 19:04:03.738902] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.610 [2024-07-20 19:04:03.738930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.610 qpair failed and we were unable to recover it. 00:33:53.610 [2024-07-20 19:04:03.748647] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.610 [2024-07-20 19:04:03.748863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.610 [2024-07-20 19:04:03.748889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.610 [2024-07-20 19:04:03.748904] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.610 [2024-07-20 19:04:03.748917] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.610 [2024-07-20 19:04:03.748951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.610 qpair failed and we were unable to recover it. 
00:33:53.610 [2024-07-20 19:04:03.758664] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.610 [2024-07-20 19:04:03.758877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.610 [2024-07-20 19:04:03.758903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.610 [2024-07-20 19:04:03.758917] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.610 [2024-07-20 19:04:03.758930] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.610 [2024-07-20 19:04:03.758959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.610 qpair failed and we were unable to recover it. 00:33:53.610 [2024-07-20 19:04:03.768719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.610 [2024-07-20 19:04:03.768947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.610 [2024-07-20 19:04:03.768973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.610 [2024-07-20 19:04:03.768987] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.610 [2024-07-20 19:04:03.769000] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.610 [2024-07-20 19:04:03.769028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.610 qpair failed and we were unable to recover it. 00:33:53.610 [2024-07-20 19:04:03.778766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.610 [2024-07-20 19:04:03.779031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.610 [2024-07-20 19:04:03.779057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.610 [2024-07-20 19:04:03.779071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.610 [2024-07-20 19:04:03.779084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.610 [2024-07-20 19:04:03.779113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.610 qpair failed and we were unable to recover it. 
00:33:53.610 [2024-07-20 19:04:03.788741] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.611 [2024-07-20 19:04:03.788966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.611 [2024-07-20 19:04:03.788992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.611 [2024-07-20 19:04:03.789006] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.611 [2024-07-20 19:04:03.789019] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.611 [2024-07-20 19:04:03.789047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.611 qpair failed and we were unable to recover it. 00:33:53.611 [2024-07-20 19:04:03.798783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.611 [2024-07-20 19:04:03.798993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.611 [2024-07-20 19:04:03.799024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.611 [2024-07-20 19:04:03.799039] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.611 [2024-07-20 19:04:03.799052] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.611 [2024-07-20 19:04:03.799081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.611 qpair failed and we were unable to recover it. 00:33:53.611 [2024-07-20 19:04:03.808822] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.611 [2024-07-20 19:04:03.809045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.611 [2024-07-20 19:04:03.809070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.611 [2024-07-20 19:04:03.809084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.611 [2024-07-20 19:04:03.809097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.611 [2024-07-20 19:04:03.809126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.611 qpair failed and we were unable to recover it. 
00:33:53.611 [2024-07-20 19:04:03.818852] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.611 [2024-07-20 19:04:03.819109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.611 [2024-07-20 19:04:03.819135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.611 [2024-07-20 19:04:03.819149] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.611 [2024-07-20 19:04:03.819162] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.611 [2024-07-20 19:04:03.819191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.611 qpair failed and we were unable to recover it. 00:33:53.611 [2024-07-20 19:04:03.828888] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.611 [2024-07-20 19:04:03.829105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.611 [2024-07-20 19:04:03.829131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.611 [2024-07-20 19:04:03.829145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.611 [2024-07-20 19:04:03.829158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.611 [2024-07-20 19:04:03.829186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.611 qpair failed and we were unable to recover it. 00:33:53.611 [2024-07-20 19:04:03.838919] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.611 [2024-07-20 19:04:03.839124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.611 [2024-07-20 19:04:03.839150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.611 [2024-07-20 19:04:03.839164] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.611 [2024-07-20 19:04:03.839183] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.611 [2024-07-20 19:04:03.839212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.611 qpair failed and we were unable to recover it. 
00:33:53.611 [2024-07-20 19:04:03.848948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.611 [2024-07-20 19:04:03.849160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.611 [2024-07-20 19:04:03.849186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.611 [2024-07-20 19:04:03.849201] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.611 [2024-07-20 19:04:03.849214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.611 [2024-07-20 19:04:03.849242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.611 qpair failed and we were unable to recover it. 00:33:53.611 [2024-07-20 19:04:03.858977] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.611 [2024-07-20 19:04:03.859199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.611 [2024-07-20 19:04:03.859224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.611 [2024-07-20 19:04:03.859239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.611 [2024-07-20 19:04:03.859252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.611 [2024-07-20 19:04:03.859280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.611 qpair failed and we were unable to recover it. 00:33:53.611 [2024-07-20 19:04:03.869016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.611 [2024-07-20 19:04:03.869224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.611 [2024-07-20 19:04:03.869250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.611 [2024-07-20 19:04:03.869264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.611 [2024-07-20 19:04:03.869277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.611 [2024-07-20 19:04:03.869305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.611 qpair failed and we were unable to recover it. 
00:33:53.611 [2024-07-20 19:04:03.879066] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.611 [2024-07-20 19:04:03.879271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.611 [2024-07-20 19:04:03.879296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.611 [2024-07-20 19:04:03.879311] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.611 [2024-07-20 19:04:03.879324] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.611 [2024-07-20 19:04:03.879352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.611 qpair failed and we were unable to recover it. 00:33:53.611 [2024-07-20 19:04:03.889071] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.611 [2024-07-20 19:04:03.889290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.611 [2024-07-20 19:04:03.889316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.611 [2024-07-20 19:04:03.889330] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.611 [2024-07-20 19:04:03.889344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.611 [2024-07-20 19:04:03.889372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.611 qpair failed and we were unable to recover it. 00:33:53.611 [2024-07-20 19:04:03.899089] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.611 [2024-07-20 19:04:03.899304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.611 [2024-07-20 19:04:03.899330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.611 [2024-07-20 19:04:03.899345] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.611 [2024-07-20 19:04:03.899358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.611 [2024-07-20 19:04:03.899386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.611 qpair failed and we were unable to recover it. 
00:33:53.611 [2024-07-20 19:04:03.909119] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.611 [2024-07-20 19:04:03.909331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.611 [2024-07-20 19:04:03.909357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.611 [2024-07-20 19:04:03.909371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.611 [2024-07-20 19:04:03.909384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.611 [2024-07-20 19:04:03.909411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.611 qpair failed and we were unable to recover it. 00:33:53.611 [2024-07-20 19:04:03.919147] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.611 [2024-07-20 19:04:03.919364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.611 [2024-07-20 19:04:03.919390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.611 [2024-07-20 19:04:03.919404] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.611 [2024-07-20 19:04:03.919417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.611 [2024-07-20 19:04:03.919445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.611 qpair failed and we were unable to recover it. 00:33:53.611 [2024-07-20 19:04:03.929201] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.611 [2024-07-20 19:04:03.929414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.611 [2024-07-20 19:04:03.929442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.611 [2024-07-20 19:04:03.929457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.612 [2024-07-20 19:04:03.929476] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.612 [2024-07-20 19:04:03.929506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.612 qpair failed and we were unable to recover it. 
00:33:53.869 [2024-07-20 19:04:03.939199] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.869 [2024-07-20 19:04:03.939417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.869 [2024-07-20 19:04:03.939445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.869 [2024-07-20 19:04:03.939460] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.869 [2024-07-20 19:04:03.939474] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.869 [2024-07-20 19:04:03.939503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.869 qpair failed and we were unable to recover it. 00:33:53.869 [2024-07-20 19:04:03.949206] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.869 [2024-07-20 19:04:03.949419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.869 [2024-07-20 19:04:03.949446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.869 [2024-07-20 19:04:03.949460] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.869 [2024-07-20 19:04:03.949474] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.869 [2024-07-20 19:04:03.949503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.869 qpair failed and we were unable to recover it. 00:33:53.869 [2024-07-20 19:04:03.959241] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.869 [2024-07-20 19:04:03.959449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.869 [2024-07-20 19:04:03.959475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.869 [2024-07-20 19:04:03.959490] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.869 [2024-07-20 19:04:03.959503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.869 [2024-07-20 19:04:03.959531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.869 qpair failed and we were unable to recover it. 
00:33:53.869 [2024-07-20 19:04:03.969280] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.869 [2024-07-20 19:04:03.969493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.869 [2024-07-20 19:04:03.969519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.869 [2024-07-20 19:04:03.969533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.869 [2024-07-20 19:04:03.969547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.869 [2024-07-20 19:04:03.969575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.869 qpair failed and we were unable to recover it. 00:33:53.869 [2024-07-20 19:04:03.979339] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.869 [2024-07-20 19:04:03.979591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.869 [2024-07-20 19:04:03.979617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.869 [2024-07-20 19:04:03.979632] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.869 [2024-07-20 19:04:03.979645] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.869 [2024-07-20 19:04:03.979674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.869 qpair failed and we were unable to recover it. 00:33:53.869 [2024-07-20 19:04:03.989326] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.869 [2024-07-20 19:04:03.989533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.869 [2024-07-20 19:04:03.989559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.869 [2024-07-20 19:04:03.989573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.869 [2024-07-20 19:04:03.989586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.869 [2024-07-20 19:04:03.989614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.869 qpair failed and we were unable to recover it. 
00:33:53.869 [2024-07-20 19:04:03.999355] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.869 [2024-07-20 19:04:03.999572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.869 [2024-07-20 19:04:03.999598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.869 [2024-07-20 19:04:03.999612] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.869 [2024-07-20 19:04:03.999626] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.869 [2024-07-20 19:04:03.999654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.869 qpair failed and we were unable to recover it. 00:33:53.869 [2024-07-20 19:04:04.009441] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.869 [2024-07-20 19:04:04.009731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.869 [2024-07-20 19:04:04.009758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.869 [2024-07-20 19:04:04.009772] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.869 [2024-07-20 19:04:04.009785] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.869 [2024-07-20 19:04:04.009821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.869 qpair failed and we were unable to recover it. 00:33:53.869 [2024-07-20 19:04:04.019468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.869 [2024-07-20 19:04:04.019685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.869 [2024-07-20 19:04:04.019711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.869 [2024-07-20 19:04:04.019725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.869 [2024-07-20 19:04:04.019743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.869 [2024-07-20 19:04:04.019772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.869 qpair failed and we were unable to recover it. 
00:33:53.869 [2024-07-20 19:04:04.029500] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.869 [2024-07-20 19:04:04.029713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.869 [2024-07-20 19:04:04.029739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.869 [2024-07-20 19:04:04.029753] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.869 [2024-07-20 19:04:04.029766] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.869 [2024-07-20 19:04:04.029802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.869 qpair failed and we were unable to recover it. 00:33:53.869 [2024-07-20 19:04:04.039465] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.869 [2024-07-20 19:04:04.039673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.870 [2024-07-20 19:04:04.039698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.870 [2024-07-20 19:04:04.039713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.870 [2024-07-20 19:04:04.039726] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.870 [2024-07-20 19:04:04.039754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.870 qpair failed and we were unable to recover it. 00:33:53.870 [2024-07-20 19:04:04.049559] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.870 [2024-07-20 19:04:04.049775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.870 [2024-07-20 19:04:04.049809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.870 [2024-07-20 19:04:04.049824] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.870 [2024-07-20 19:04:04.049837] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.870 [2024-07-20 19:04:04.049866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.870 qpair failed and we were unable to recover it. 
00:33:53.870 [2024-07-20 19:04:04.059564] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.870 [2024-07-20 19:04:04.059777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.870 [2024-07-20 19:04:04.059813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.870 [2024-07-20 19:04:04.059829] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.870 [2024-07-20 19:04:04.059842] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.870 [2024-07-20 19:04:04.059871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.870 qpair failed and we were unable to recover it. 00:33:53.870 [2024-07-20 19:04:04.069587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.870 [2024-07-20 19:04:04.069809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.870 [2024-07-20 19:04:04.069836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.870 [2024-07-20 19:04:04.069850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.870 [2024-07-20 19:04:04.069863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.870 [2024-07-20 19:04:04.069892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.870 qpair failed and we were unable to recover it. 00:33:53.870 [2024-07-20 19:04:04.079643] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.870 [2024-07-20 19:04:04.079903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.870 [2024-07-20 19:04:04.079929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.870 [2024-07-20 19:04:04.079943] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.870 [2024-07-20 19:04:04.079957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.870 [2024-07-20 19:04:04.079985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.870 qpair failed and we were unable to recover it. 
00:33:53.870 [2024-07-20 19:04:04.089624] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.870 [2024-07-20 19:04:04.089840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.870 [2024-07-20 19:04:04.089865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.870 [2024-07-20 19:04:04.089880] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.870 [2024-07-20 19:04:04.089893] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.870 [2024-07-20 19:04:04.089921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.870 qpair failed and we were unable to recover it. 00:33:53.870 [2024-07-20 19:04:04.099666] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.870 [2024-07-20 19:04:04.099898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.870 [2024-07-20 19:04:04.099924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.870 [2024-07-20 19:04:04.099938] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.870 [2024-07-20 19:04:04.099952] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.870 [2024-07-20 19:04:04.099982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.870 qpair failed and we were unable to recover it. 00:33:53.870 [2024-07-20 19:04:04.109692] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.870 [2024-07-20 19:04:04.109908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.870 [2024-07-20 19:04:04.109933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.870 [2024-07-20 19:04:04.109953] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.870 [2024-07-20 19:04:04.109967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.870 [2024-07-20 19:04:04.109997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.870 qpair failed and we were unable to recover it. 
00:33:53.870 [2024-07-20 19:04:04.119749] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.870 [2024-07-20 19:04:04.119967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.870 [2024-07-20 19:04:04.119993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.870 [2024-07-20 19:04:04.120007] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.870 [2024-07-20 19:04:04.120020] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.870 [2024-07-20 19:04:04.120051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.870 qpair failed and we were unable to recover it. 00:33:53.870 [2024-07-20 19:04:04.129745] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.870 [2024-07-20 19:04:04.130009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.870 [2024-07-20 19:04:04.130035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.870 [2024-07-20 19:04:04.130049] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.870 [2024-07-20 19:04:04.130063] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.870 [2024-07-20 19:04:04.130092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.870 qpair failed and we were unable to recover it. 00:33:53.870 [2024-07-20 19:04:04.139773] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.870 [2024-07-20 19:04:04.139998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.870 [2024-07-20 19:04:04.140023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.870 [2024-07-20 19:04:04.140038] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.870 [2024-07-20 19:04:04.140051] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.870 [2024-07-20 19:04:04.140079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.870 qpair failed and we were unable to recover it. 
00:33:53.870 [2024-07-20 19:04:04.149800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.870 [2024-07-20 19:04:04.150025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.870 [2024-07-20 19:04:04.150060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.870 [2024-07-20 19:04:04.150074] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.870 [2024-07-20 19:04:04.150087] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.870 [2024-07-20 19:04:04.150115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.870 qpair failed and we were unable to recover it. 00:33:53.870 [2024-07-20 19:04:04.159839] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.870 [2024-07-20 19:04:04.160048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.870 [2024-07-20 19:04:04.160073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.870 [2024-07-20 19:04:04.160088] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.870 [2024-07-20 19:04:04.160101] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.870 [2024-07-20 19:04:04.160130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.870 qpair failed and we were unable to recover it. 00:33:53.870 [2024-07-20 19:04:04.169873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.870 [2024-07-20 19:04:04.170090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.870 [2024-07-20 19:04:04.170115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.870 [2024-07-20 19:04:04.170129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.870 [2024-07-20 19:04:04.170142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.870 [2024-07-20 19:04:04.170170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.870 qpair failed and we were unable to recover it. 
00:33:53.870 [2024-07-20 19:04:04.179929] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.870 [2024-07-20 19:04:04.180147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.870 [2024-07-20 19:04:04.180173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.870 [2024-07-20 19:04:04.180188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.871 [2024-07-20 19:04:04.180201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.871 [2024-07-20 19:04:04.180230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.871 qpair failed and we were unable to recover it. 00:33:53.871 [2024-07-20 19:04:04.189927] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.871 [2024-07-20 19:04:04.190144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.871 [2024-07-20 19:04:04.190172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.871 [2024-07-20 19:04:04.190187] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.871 [2024-07-20 19:04:04.190200] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:53.871 [2024-07-20 19:04:04.190229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:53.871 qpair failed and we were unable to recover it. 00:33:54.128 [2024-07-20 19:04:04.199991] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.128 [2024-07-20 19:04:04.200203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.128 [2024-07-20 19:04:04.200232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.128 [2024-07-20 19:04:04.200253] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.128 [2024-07-20 19:04:04.200266] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.128 [2024-07-20 19:04:04.200297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.128 qpair failed and we were unable to recover it. 
00:33:54.128 [2024-07-20 19:04:04.209992] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.128 [2024-07-20 19:04:04.210214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.128 [2024-07-20 19:04:04.210240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.128 [2024-07-20 19:04:04.210254] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.128 [2024-07-20 19:04:04.210267] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.128 [2024-07-20 19:04:04.210296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.128 qpair failed and we were unable to recover it. 00:33:54.128 [2024-07-20 19:04:04.220031] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.128 [2024-07-20 19:04:04.220256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.128 [2024-07-20 19:04:04.220282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.128 [2024-07-20 19:04:04.220297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.128 [2024-07-20 19:04:04.220310] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.128 [2024-07-20 19:04:04.220339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.128 qpair failed and we were unable to recover it. 00:33:54.128 [2024-07-20 19:04:04.230048] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.128 [2024-07-20 19:04:04.230255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.128 [2024-07-20 19:04:04.230281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.128 [2024-07-20 19:04:04.230295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.128 [2024-07-20 19:04:04.230308] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.128 [2024-07-20 19:04:04.230337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.128 qpair failed and we were unable to recover it. 
00:33:54.128 [2024-07-20 19:04:04.240103] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.128 [2024-07-20 19:04:04.240310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.128 [2024-07-20 19:04:04.240335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.128 [2024-07-20 19:04:04.240350] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.128 [2024-07-20 19:04:04.240363] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.128 [2024-07-20 19:04:04.240391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.128 qpair failed and we were unable to recover it. 00:33:54.128 [2024-07-20 19:04:04.250110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.128 [2024-07-20 19:04:04.250325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.128 [2024-07-20 19:04:04.250350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.128 [2024-07-20 19:04:04.250364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.128 [2024-07-20 19:04:04.250377] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.128 [2024-07-20 19:04:04.250405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.128 qpair failed and we were unable to recover it. 00:33:54.128 [2024-07-20 19:04:04.260132] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.128 [2024-07-20 19:04:04.260363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.129 [2024-07-20 19:04:04.260389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.129 [2024-07-20 19:04:04.260404] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.129 [2024-07-20 19:04:04.260417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.129 [2024-07-20 19:04:04.260446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.129 qpair failed and we were unable to recover it. 
00:33:54.129 [2024-07-20 19:04:04.270155] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.129 [2024-07-20 19:04:04.270363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.129 [2024-07-20 19:04:04.270388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.129 [2024-07-20 19:04:04.270403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.129 [2024-07-20 19:04:04.270416] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.129 [2024-07-20 19:04:04.270445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.129 qpair failed and we were unable to recover it. 00:33:54.129 [2024-07-20 19:04:04.280179] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.129 [2024-07-20 19:04:04.280395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.129 [2024-07-20 19:04:04.280420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.129 [2024-07-20 19:04:04.280435] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.129 [2024-07-20 19:04:04.280448] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.129 [2024-07-20 19:04:04.280476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.129 qpair failed and we were unable to recover it. 00:33:54.129 [2024-07-20 19:04:04.290235] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.129 [2024-07-20 19:04:04.290446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.129 [2024-07-20 19:04:04.290471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.129 [2024-07-20 19:04:04.290492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.129 [2024-07-20 19:04:04.290506] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.129 [2024-07-20 19:04:04.290536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.129 qpair failed and we were unable to recover it. 
00:33:54.129 [2024-07-20 19:04:04.300250] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.129 [2024-07-20 19:04:04.300474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.129 [2024-07-20 19:04:04.300499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.129 [2024-07-20 19:04:04.300514] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.129 [2024-07-20 19:04:04.300526] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.129 [2024-07-20 19:04:04.300555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.129 qpair failed and we were unable to recover it. 00:33:54.129 [2024-07-20 19:04:04.310278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.129 [2024-07-20 19:04:04.310536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.129 [2024-07-20 19:04:04.310563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.129 [2024-07-20 19:04:04.310577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.129 [2024-07-20 19:04:04.310593] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.129 [2024-07-20 19:04:04.310623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.129 qpair failed and we were unable to recover it. 00:33:54.129 [2024-07-20 19:04:04.320338] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.129 [2024-07-20 19:04:04.320547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.129 [2024-07-20 19:04:04.320573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.129 [2024-07-20 19:04:04.320591] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.129 [2024-07-20 19:04:04.320604] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.129 [2024-07-20 19:04:04.320633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.129 qpair failed and we were unable to recover it. 
00:33:54.129 [2024-07-20 19:04:04.330366] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.129 [2024-07-20 19:04:04.330607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.129 [2024-07-20 19:04:04.330632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.129 [2024-07-20 19:04:04.330647] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.129 [2024-07-20 19:04:04.330660] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.129 [2024-07-20 19:04:04.330688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.129 qpair failed and we were unable to recover it. 00:33:54.129 [2024-07-20 19:04:04.340368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.129 [2024-07-20 19:04:04.340573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.129 [2024-07-20 19:04:04.340599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.129 [2024-07-20 19:04:04.340613] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.129 [2024-07-20 19:04:04.340627] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.129 [2024-07-20 19:04:04.340655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.129 qpair failed and we were unable to recover it. 00:33:54.129 [2024-07-20 19:04:04.350417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.129 [2024-07-20 19:04:04.350628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.129 [2024-07-20 19:04:04.350653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.129 [2024-07-20 19:04:04.350667] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.129 [2024-07-20 19:04:04.350679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.129 [2024-07-20 19:04:04.350708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.129 qpair failed and we were unable to recover it. 
00:33:54.129 [2024-07-20 19:04:04.360405] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.129 [2024-07-20 19:04:04.360632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.129 [2024-07-20 19:04:04.360658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.129 [2024-07-20 19:04:04.360672] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.129 [2024-07-20 19:04:04.360686] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.129 [2024-07-20 19:04:04.360718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.129 qpair failed and we were unable to recover it. 00:33:54.129 [2024-07-20 19:04:04.370482] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.129 [2024-07-20 19:04:04.370738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.129 [2024-07-20 19:04:04.370764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.129 [2024-07-20 19:04:04.370778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.129 [2024-07-20 19:04:04.370791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.129 [2024-07-20 19:04:04.370829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.129 qpair failed and we were unable to recover it. 00:33:54.129 [2024-07-20 19:04:04.380543] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.129 [2024-07-20 19:04:04.380812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.129 [2024-07-20 19:04:04.380844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.129 [2024-07-20 19:04:04.380860] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.129 [2024-07-20 19:04:04.380874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.129 [2024-07-20 19:04:04.380904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.129 qpair failed and we were unable to recover it. 
00:33:54.129 [2024-07-20 19:04:04.390528] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.129 [2024-07-20 19:04:04.390749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.129 [2024-07-20 19:04:04.390775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.129 [2024-07-20 19:04:04.390789] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.129 [2024-07-20 19:04:04.390811] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.129 [2024-07-20 19:04:04.390842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.129 qpair failed and we were unable to recover it. 00:33:54.129 [2024-07-20 19:04:04.400580] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.129 [2024-07-20 19:04:04.400806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.129 [2024-07-20 19:04:04.400831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.129 [2024-07-20 19:04:04.400846] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.129 [2024-07-20 19:04:04.400859] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.130 [2024-07-20 19:04:04.400888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.130 qpair failed and we were unable to recover it. 00:33:54.130 [2024-07-20 19:04:04.410606] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.130 [2024-07-20 19:04:04.410838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.130 [2024-07-20 19:04:04.410863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.130 [2024-07-20 19:04:04.410878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.130 [2024-07-20 19:04:04.410891] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.130 [2024-07-20 19:04:04.410920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.130 qpair failed and we were unable to recover it. 
00:33:54.130 [2024-07-20 19:04:04.420598] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.130 [2024-07-20 19:04:04.420819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.130 [2024-07-20 19:04:04.420846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.130 [2024-07-20 19:04:04.420860] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.130 [2024-07-20 19:04:04.420873] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.130 [2024-07-20 19:04:04.420903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.130 qpair failed and we were unable to recover it. 00:33:54.130 [2024-07-20 19:04:04.430638] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.130 [2024-07-20 19:04:04.430891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.130 [2024-07-20 19:04:04.430917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.130 [2024-07-20 19:04:04.430932] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.130 [2024-07-20 19:04:04.430946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.130 [2024-07-20 19:04:04.430974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.130 qpair failed and we were unable to recover it. 00:33:54.130 [2024-07-20 19:04:04.440657] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.130 [2024-07-20 19:04:04.440872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.130 [2024-07-20 19:04:04.440897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.130 [2024-07-20 19:04:04.440912] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.130 [2024-07-20 19:04:04.440926] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.130 [2024-07-20 19:04:04.440954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.130 qpair failed and we were unable to recover it. 
00:33:54.130 [2024-07-20 19:04:04.450800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.130 [2024-07-20 19:04:04.451048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.130 [2024-07-20 19:04:04.451085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.130 [2024-07-20 19:04:04.451113] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.130 [2024-07-20 19:04:04.451141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.130 [2024-07-20 19:04:04.451178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.130 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 19:04:04.460706] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.388 [2024-07-20 19:04:04.460930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.388 [2024-07-20 19:04:04.460958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.388 [2024-07-20 19:04:04.460973] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.388 [2024-07-20 19:04:04.460987] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.388 [2024-07-20 19:04:04.461016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 19:04:04.470734] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.388 [2024-07-20 19:04:04.470943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.388 [2024-07-20 19:04:04.470975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.388 [2024-07-20 19:04:04.470990] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.388 [2024-07-20 19:04:04.471003] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.388 [2024-07-20 19:04:04.471032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.388 qpair failed and we were unable to recover it. 
00:33:54.388 [2024-07-20 19:04:04.480832] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.388 [2024-07-20 19:04:04.481095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.388 [2024-07-20 19:04:04.481121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.388 [2024-07-20 19:04:04.481135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.388 [2024-07-20 19:04:04.481148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.388 [2024-07-20 19:04:04.481177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 19:04:04.490836] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.388 [2024-07-20 19:04:04.491081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.388 [2024-07-20 19:04:04.491106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.388 [2024-07-20 19:04:04.491121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.388 [2024-07-20 19:04:04.491134] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.388 [2024-07-20 19:04:04.491163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 19:04:04.500848] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.388 [2024-07-20 19:04:04.501098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.388 [2024-07-20 19:04:04.501124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.388 [2024-07-20 19:04:04.501139] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.388 [2024-07-20 19:04:04.501152] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.388 [2024-07-20 19:04:04.501180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.388 qpair failed and we were unable to recover it. 
00:33:54.388 [2024-07-20 19:04:04.510859] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.388 [2024-07-20 19:04:04.511070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.388 [2024-07-20 19:04:04.511096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.388 [2024-07-20 19:04:04.511111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.388 [2024-07-20 19:04:04.511123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.388 [2024-07-20 19:04:04.511158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 19:04:04.520900] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.388 [2024-07-20 19:04:04.521127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.388 [2024-07-20 19:04:04.521152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.388 [2024-07-20 19:04:04.521167] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.388 [2024-07-20 19:04:04.521180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.388 [2024-07-20 19:04:04.521208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 19:04:04.530978] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.388 [2024-07-20 19:04:04.531225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.388 [2024-07-20 19:04:04.531250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.388 [2024-07-20 19:04:04.531265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.388 [2024-07-20 19:04:04.531277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.388 [2024-07-20 19:04:04.531306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.388 qpair failed and we were unable to recover it. 
00:33:54.388 [2024-07-20 19:04:04.540963] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.389 [2024-07-20 19:04:04.541178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.389 [2024-07-20 19:04:04.541204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.389 [2024-07-20 19:04:04.541219] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.389 [2024-07-20 19:04:04.541232] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.389 [2024-07-20 19:04:04.541260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 19:04:04.550991] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.389 [2024-07-20 19:04:04.551215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.389 [2024-07-20 19:04:04.551241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.389 [2024-07-20 19:04:04.551255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.389 [2024-07-20 19:04:04.551268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.389 [2024-07-20 19:04:04.551296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 19:04:04.561042] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.389 [2024-07-20 19:04:04.561253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.389 [2024-07-20 19:04:04.561283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.389 [2024-07-20 19:04:04.561298] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.389 [2024-07-20 19:04:04.561311] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.389 [2024-07-20 19:04:04.561339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.389 qpair failed and we were unable to recover it. 
00:33:54.389 [2024-07-20 19:04:04.571060] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.389 [2024-07-20 19:04:04.571273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.389 [2024-07-20 19:04:04.571299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.389 [2024-07-20 19:04:04.571313] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.389 [2024-07-20 19:04:04.571326] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.389 [2024-07-20 19:04:04.571354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 19:04:04.581086] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.389 [2024-07-20 19:04:04.581392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.389 [2024-07-20 19:04:04.581417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.389 [2024-07-20 19:04:04.581432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.389 [2024-07-20 19:04:04.581445] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.389 [2024-07-20 19:04:04.581473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 19:04:04.591117] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.389 [2024-07-20 19:04:04.591328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.389 [2024-07-20 19:04:04.591354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.389 [2024-07-20 19:04:04.591368] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.389 [2024-07-20 19:04:04.591381] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.389 [2024-07-20 19:04:04.591409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.389 qpair failed and we were unable to recover it. 
00:33:54.389 [2024-07-20 19:04:04.601126] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.389 [2024-07-20 19:04:04.601327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.389 [2024-07-20 19:04:04.601352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.389 [2024-07-20 19:04:04.601367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.389 [2024-07-20 19:04:04.601380] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.389 [2024-07-20 19:04:04.601413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 19:04:04.611162] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.389 [2024-07-20 19:04:04.611417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.389 [2024-07-20 19:04:04.611443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.389 [2024-07-20 19:04:04.611458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.389 [2024-07-20 19:04:04.611471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.389 [2024-07-20 19:04:04.611498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 19:04:04.621199] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.389 [2024-07-20 19:04:04.621409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.389 [2024-07-20 19:04:04.621435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.389 [2024-07-20 19:04:04.621450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.389 [2024-07-20 19:04:04.621464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.389 [2024-07-20 19:04:04.621492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.389 qpair failed and we were unable to recover it. 
00:33:54.389 [2024-07-20 19:04:04.631220] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.389 [2024-07-20 19:04:04.631425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.389 [2024-07-20 19:04:04.631451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.389 [2024-07-20 19:04:04.631465] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.389 [2024-07-20 19:04:04.631479] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.389 [2024-07-20 19:04:04.631507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 19:04:04.641241] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.389 [2024-07-20 19:04:04.641452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.389 [2024-07-20 19:04:04.641478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.389 [2024-07-20 19:04:04.641493] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.389 [2024-07-20 19:04:04.641506] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.389 [2024-07-20 19:04:04.641535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 19:04:04.651266] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.389 [2024-07-20 19:04:04.651488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.389 [2024-07-20 19:04:04.651518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.389 [2024-07-20 19:04:04.651534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.389 [2024-07-20 19:04:04.651547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.389 [2024-07-20 19:04:04.651576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.389 qpair failed and we were unable to recover it. 
00:33:54.389 [2024-07-20 19:04:04.661295] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.389 [2024-07-20 19:04:04.661503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.389 [2024-07-20 19:04:04.661529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.389 [2024-07-20 19:04:04.661543] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.389 [2024-07-20 19:04:04.661557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.389 [2024-07-20 19:04:04.661585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 19:04:04.671331] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.389 [2024-07-20 19:04:04.671541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.389 [2024-07-20 19:04:04.671566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.389 [2024-07-20 19:04:04.671580] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.389 [2024-07-20 19:04:04.671593] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.389 [2024-07-20 19:04:04.671621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 19:04:04.681356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.389 [2024-07-20 19:04:04.681562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.389 [2024-07-20 19:04:04.681587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.389 [2024-07-20 19:04:04.681602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.390 [2024-07-20 19:04:04.681615] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.390 [2024-07-20 19:04:04.681643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.390 qpair failed and we were unable to recover it. 
00:33:54.390 [2024-07-20 19:04:04.691409] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.390 [2024-07-20 19:04:04.691671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.390 [2024-07-20 19:04:04.691697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.390 [2024-07-20 19:04:04.691711] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.390 [2024-07-20 19:04:04.691725] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.390 [2024-07-20 19:04:04.691759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 19:04:04.701437] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.390 [2024-07-20 19:04:04.701727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.390 [2024-07-20 19:04:04.701754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.390 [2024-07-20 19:04:04.701768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.390 [2024-07-20 19:04:04.701781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.390 [2024-07-20 19:04:04.701819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.648 [2024-07-20 19:04:04.711434] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.648 [2024-07-20 19:04:04.711688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.648 [2024-07-20 19:04:04.711718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.648 [2024-07-20 19:04:04.711733] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.648 [2024-07-20 19:04:04.711747] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.648 [2024-07-20 19:04:04.711777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.648 qpair failed and we were unable to recover it. 
00:33:54.648 [2024-07-20 19:04:04.721458] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.648 [2024-07-20 19:04:04.721661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.648 [2024-07-20 19:04:04.721689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.648 [2024-07-20 19:04:04.721705] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.648 [2024-07-20 19:04:04.721718] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.648 [2024-07-20 19:04:04.721747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.648 qpair failed and we were unable to recover it. 00:33:54.648 [2024-07-20 19:04:04.731527] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.648 [2024-07-20 19:04:04.731749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.648 [2024-07-20 19:04:04.731775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.648 [2024-07-20 19:04:04.731790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.648 [2024-07-20 19:04:04.731812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.648 [2024-07-20 19:04:04.731842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.648 qpair failed and we were unable to recover it. 00:33:54.648 [2024-07-20 19:04:04.741636] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.648 [2024-07-20 19:04:04.741861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.648 [2024-07-20 19:04:04.741895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.648 [2024-07-20 19:04:04.741910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.648 [2024-07-20 19:04:04.741923] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.648 [2024-07-20 19:04:04.741952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.648 qpair failed and we were unable to recover it. 
00:33:54.648 [2024-07-20 19:04:04.751626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.648 [2024-07-20 19:04:04.751877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.648 [2024-07-20 19:04:04.751904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.648 [2024-07-20 19:04:04.751919] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.648 [2024-07-20 19:04:04.751948] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.648 [2024-07-20 19:04:04.751981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.648 qpair failed and we were unable to recover it. 00:33:54.648 [2024-07-20 19:04:04.761621] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.648 [2024-07-20 19:04:04.761840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.648 [2024-07-20 19:04:04.761866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.648 [2024-07-20 19:04:04.761881] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.648 [2024-07-20 19:04:04.761894] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.648 [2024-07-20 19:04:04.761926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.648 qpair failed and we were unable to recover it. 00:33:54.648 [2024-07-20 19:04:04.771748] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.648 [2024-07-20 19:04:04.771973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.648 [2024-07-20 19:04:04.771998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.648 [2024-07-20 19:04:04.772013] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.648 [2024-07-20 19:04:04.772026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.648 [2024-07-20 19:04:04.772055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.648 qpair failed and we were unable to recover it. 
00:33:54.648 [2024-07-20 19:04:04.781724] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.649 [2024-07-20 19:04:04.781997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.649 [2024-07-20 19:04:04.782026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.649 [2024-07-20 19:04:04.782041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.649 [2024-07-20 19:04:04.782063] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.649 [2024-07-20 19:04:04.782095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.649 qpair failed and we were unable to recover it. 00:33:54.649 [2024-07-20 19:04:04.791678] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.649 [2024-07-20 19:04:04.791886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.649 [2024-07-20 19:04:04.791912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.649 [2024-07-20 19:04:04.791927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.649 [2024-07-20 19:04:04.791940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.649 [2024-07-20 19:04:04.791971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.649 qpair failed and we were unable to recover it. 00:33:54.649 [2024-07-20 19:04:04.801712] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.649 [2024-07-20 19:04:04.801930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.649 [2024-07-20 19:04:04.801957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.649 [2024-07-20 19:04:04.801971] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.649 [2024-07-20 19:04:04.801984] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.649 [2024-07-20 19:04:04.802014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.649 qpair failed and we were unable to recover it. 
00:33:54.649 [2024-07-20 19:04:04.811774] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.649 [2024-07-20 19:04:04.812002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.649 [2024-07-20 19:04:04.812029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.649 [2024-07-20 19:04:04.812043] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.649 [2024-07-20 19:04:04.812057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.649 [2024-07-20 19:04:04.812086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.649 qpair failed and we were unable to recover it. 00:33:54.649 [2024-07-20 19:04:04.821800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.649 [2024-07-20 19:04:04.822052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.649 [2024-07-20 19:04:04.822078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.649 [2024-07-20 19:04:04.822093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.649 [2024-07-20 19:04:04.822106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.649 [2024-07-20 19:04:04.822134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.649 qpair failed and we were unable to recover it. 00:33:54.649 [2024-07-20 19:04:04.831775] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.649 [2024-07-20 19:04:04.832021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.649 [2024-07-20 19:04:04.832047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.649 [2024-07-20 19:04:04.832062] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.649 [2024-07-20 19:04:04.832075] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.649 [2024-07-20 19:04:04.832104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.649 qpair failed and we were unable to recover it. 
00:33:54.649 [2024-07-20 19:04:04.841867] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.649 [2024-07-20 19:04:04.842124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.649 [2024-07-20 19:04:04.842152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.649 [2024-07-20 19:04:04.842167] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.649 [2024-07-20 19:04:04.842180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.649 [2024-07-20 19:04:04.842210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.649 qpair failed and we were unable to recover it. 00:33:54.649 [2024-07-20 19:04:04.851855] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.649 [2024-07-20 19:04:04.852129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.649 [2024-07-20 19:04:04.852155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.649 [2024-07-20 19:04:04.852170] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.649 [2024-07-20 19:04:04.852183] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.649 [2024-07-20 19:04:04.852211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.649 qpair failed and we were unable to recover it. 00:33:54.649 [2024-07-20 19:04:04.861911] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.649 [2024-07-20 19:04:04.862130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.649 [2024-07-20 19:04:04.862156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.649 [2024-07-20 19:04:04.862170] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.649 [2024-07-20 19:04:04.862183] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.649 [2024-07-20 19:04:04.862212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.649 qpair failed and we were unable to recover it. 
00:33:54.649 [2024-07-20 19:04:04.871916] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.649 [2024-07-20 19:04:04.872156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.649 [2024-07-20 19:04:04.872181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.649 [2024-07-20 19:04:04.872195] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.649 [2024-07-20 19:04:04.872215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.649 [2024-07-20 19:04:04.872244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.649 qpair failed and we were unable to recover it. 00:33:54.649 [2024-07-20 19:04:04.881958] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.649 [2024-07-20 19:04:04.882166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.649 [2024-07-20 19:04:04.882191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.649 [2024-07-20 19:04:04.882205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.649 [2024-07-20 19:04:04.882219] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.649 [2024-07-20 19:04:04.882246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.649 qpair failed and we were unable to recover it. 00:33:54.649 [2024-07-20 19:04:04.891986] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.649 [2024-07-20 19:04:04.892198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.649 [2024-07-20 19:04:04.892223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.649 [2024-07-20 19:04:04.892237] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.649 [2024-07-20 19:04:04.892250] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.649 [2024-07-20 19:04:04.892279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.649 qpair failed and we were unable to recover it. 
00:33:54.649 [2024-07-20 19:04:04.902018] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.649 [2024-07-20 19:04:04.902238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.649 [2024-07-20 19:04:04.902263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.649 [2024-07-20 19:04:04.902278] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.649 [2024-07-20 19:04:04.902291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.649 [2024-07-20 19:04:04.902320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.649 qpair failed and we were unable to recover it. 00:33:54.649 [2024-07-20 19:04:04.912022] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.649 [2024-07-20 19:04:04.912282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.649 [2024-07-20 19:04:04.912314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.649 [2024-07-20 19:04:04.912328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.649 [2024-07-20 19:04:04.912342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.649 [2024-07-20 19:04:04.912370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.649 qpair failed and we were unable to recover it. 00:33:54.649 [2024-07-20 19:04:04.922048] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.649 [2024-07-20 19:04:04.922266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.649 [2024-07-20 19:04:04.922291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.649 [2024-07-20 19:04:04.922306] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.650 [2024-07-20 19:04:04.922319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.650 [2024-07-20 19:04:04.922347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.650 qpair failed and we were unable to recover it. 
00:33:54.650 [2024-07-20 19:04:04.932080] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.650 [2024-07-20 19:04:04.932294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.650 [2024-07-20 19:04:04.932319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.650 [2024-07-20 19:04:04.932333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.650 [2024-07-20 19:04:04.932347] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.650 [2024-07-20 19:04:04.932375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.650 qpair failed and we were unable to recover it. 00:33:54.650 [2024-07-20 19:04:04.942123] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.650 [2024-07-20 19:04:04.942377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.650 [2024-07-20 19:04:04.942402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.650 [2024-07-20 19:04:04.942417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.650 [2024-07-20 19:04:04.942431] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.650 [2024-07-20 19:04:04.942459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.650 qpair failed and we were unable to recover it. 00:33:54.650 [2024-07-20 19:04:04.952133] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.650 [2024-07-20 19:04:04.952339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.650 [2024-07-20 19:04:04.952365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.650 [2024-07-20 19:04:04.952379] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.650 [2024-07-20 19:04:04.952392] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.650 [2024-07-20 19:04:04.952420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.650 qpair failed and we were unable to recover it. 
00:33:54.650 [2024-07-20 19:04:04.962175] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.650 [2024-07-20 19:04:04.962432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.650 [2024-07-20 19:04:04.962457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.650 [2024-07-20 19:04:04.962472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.650 [2024-07-20 19:04:04.962491] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.650 [2024-07-20 19:04:04.962520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.650 qpair failed and we were unable to recover it. 00:33:54.908 [2024-07-20 19:04:04.972238] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.908 [2024-07-20 19:04:04.972492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.908 [2024-07-20 19:04:04.972519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.908 [2024-07-20 19:04:04.972535] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.908 [2024-07-20 19:04:04.972548] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.908 [2024-07-20 19:04:04.972579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.908 qpair failed and we were unable to recover it. 00:33:54.908 [2024-07-20 19:04:04.982266] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.908 [2024-07-20 19:04:04.982475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.908 [2024-07-20 19:04:04.982503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.908 [2024-07-20 19:04:04.982518] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.908 [2024-07-20 19:04:04.982531] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.908 [2024-07-20 19:04:04.982560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.908 qpair failed and we were unable to recover it. 
00:33:54.908 [2024-07-20 19:04:04.992260] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.908 [2024-07-20 19:04:04.992518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.908 [2024-07-20 19:04:04.992544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.908 [2024-07-20 19:04:04.992559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.908 [2024-07-20 19:04:04.992572] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.908 [2024-07-20 19:04:04.992601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.908 qpair failed and we were unable to recover it. 00:33:54.908 [2024-07-20 19:04:05.002247] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.908 [2024-07-20 19:04:05.002449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.908 [2024-07-20 19:04:05.002475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.908 [2024-07-20 19:04:05.002490] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.908 [2024-07-20 19:04:05.002503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.908 [2024-07-20 19:04:05.002531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.908 qpair failed and we were unable to recover it. 00:33:54.908 [2024-07-20 19:04:05.012322] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.908 [2024-07-20 19:04:05.012533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.908 [2024-07-20 19:04:05.012558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.908 [2024-07-20 19:04:05.012573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.908 [2024-07-20 19:04:05.012586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.908 [2024-07-20 19:04:05.012614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.908 qpair failed and we were unable to recover it. 
00:33:54.908 [2024-07-20 19:04:05.022368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.908 [2024-07-20 19:04:05.022612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.908 [2024-07-20 19:04:05.022638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.908 [2024-07-20 19:04:05.022653] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.908 [2024-07-20 19:04:05.022666] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.908 [2024-07-20 19:04:05.022694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.908 qpair failed and we were unable to recover it. 00:33:54.908 [2024-07-20 19:04:05.032384] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.908 [2024-07-20 19:04:05.032594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.908 [2024-07-20 19:04:05.032619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.908 [2024-07-20 19:04:05.032634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.908 [2024-07-20 19:04:05.032647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.908 [2024-07-20 19:04:05.032675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.908 qpair failed and we were unable to recover it. 00:33:54.908 [2024-07-20 19:04:05.042366] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.908 [2024-07-20 19:04:05.042570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.908 [2024-07-20 19:04:05.042596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.908 [2024-07-20 19:04:05.042610] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.908 [2024-07-20 19:04:05.042624] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.908 [2024-07-20 19:04:05.042653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.908 qpair failed and we were unable to recover it. 
00:33:54.908 [2024-07-20 19:04:05.052426] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.908 [2024-07-20 19:04:05.052636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.908 [2024-07-20 19:04:05.052662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.908 [2024-07-20 19:04:05.052682] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.908 [2024-07-20 19:04:05.052697] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.908 [2024-07-20 19:04:05.052725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.908 qpair failed and we were unable to recover it. 00:33:54.908 [2024-07-20 19:04:05.062465] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.908 [2024-07-20 19:04:05.062749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.908 [2024-07-20 19:04:05.062775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.909 [2024-07-20 19:04:05.062789] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.909 [2024-07-20 19:04:05.062813] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.909 [2024-07-20 19:04:05.062845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.909 qpair failed and we were unable to recover it. 00:33:54.909 [2024-07-20 19:04:05.072489] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.909 [2024-07-20 19:04:05.072696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.909 [2024-07-20 19:04:05.072721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.909 [2024-07-20 19:04:05.072735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.909 [2024-07-20 19:04:05.072748] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.909 [2024-07-20 19:04:05.072777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.909 qpair failed and we were unable to recover it. 
00:33:54.909 [2024-07-20 19:04:05.082510] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.909 [2024-07-20 19:04:05.082720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.909 [2024-07-20 19:04:05.082745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.909 [2024-07-20 19:04:05.082759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.909 [2024-07-20 19:04:05.082773] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.909 [2024-07-20 19:04:05.082808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.909 qpair failed and we were unable to recover it. 00:33:54.909 [2024-07-20 19:04:05.092548] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.909 [2024-07-20 19:04:05.092802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.909 [2024-07-20 19:04:05.092827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.909 [2024-07-20 19:04:05.092842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.909 [2024-07-20 19:04:05.092855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.909 [2024-07-20 19:04:05.092883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.909 qpair failed and we were unable to recover it. 00:33:54.909 [2024-07-20 19:04:05.102567] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.909 [2024-07-20 19:04:05.102804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.909 [2024-07-20 19:04:05.102831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.909 [2024-07-20 19:04:05.102846] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.909 [2024-07-20 19:04:05.102859] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.909 [2024-07-20 19:04:05.102891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.909 qpair failed and we were unable to recover it. 
00:33:54.909 [2024-07-20 19:04:05.112579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.909 [2024-07-20 19:04:05.112789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.909 [2024-07-20 19:04:05.112822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.909 [2024-07-20 19:04:05.112837] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.909 [2024-07-20 19:04:05.112849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.909 [2024-07-20 19:04:05.112878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.909 qpair failed and we were unable to recover it. 00:33:54.909 [2024-07-20 19:04:05.122622] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.909 [2024-07-20 19:04:05.122837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.909 [2024-07-20 19:04:05.122863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.909 [2024-07-20 19:04:05.122878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.909 [2024-07-20 19:04:05.122891] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.909 [2024-07-20 19:04:05.122922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.909 qpair failed and we were unable to recover it. 00:33:54.909 [2024-07-20 19:04:05.132649] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.909 [2024-07-20 19:04:05.132879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.909 [2024-07-20 19:04:05.132904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.909 [2024-07-20 19:04:05.132918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.909 [2024-07-20 19:04:05.132932] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.909 [2024-07-20 19:04:05.132960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.909 qpair failed and we were unable to recover it. 
00:33:54.909 [2024-07-20 19:04:05.142701] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.909 [2024-07-20 19:04:05.142915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.909 [2024-07-20 19:04:05.142941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.909 [2024-07-20 19:04:05.142961] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.909 [2024-07-20 19:04:05.142975] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.909 [2024-07-20 19:04:05.143004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.909 qpair failed and we were unable to recover it. 00:33:54.909 [2024-07-20 19:04:05.152708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.909 [2024-07-20 19:04:05.152916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.909 [2024-07-20 19:04:05.152942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.909 [2024-07-20 19:04:05.152956] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.909 [2024-07-20 19:04:05.152969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.909 [2024-07-20 19:04:05.152999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.909 qpair failed and we were unable to recover it. 00:33:54.909 [2024-07-20 19:04:05.162724] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.909 [2024-07-20 19:04:05.162933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.909 [2024-07-20 19:04:05.162959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.909 [2024-07-20 19:04:05.162974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.909 [2024-07-20 19:04:05.162987] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.909 [2024-07-20 19:04:05.163015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.909 qpair failed and we were unable to recover it. 
00:33:54.909 [2024-07-20 19:04:05.172814] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.909 [2024-07-20 19:04:05.173029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.909 [2024-07-20 19:04:05.173055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.910 [2024-07-20 19:04:05.173070] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.910 [2024-07-20 19:04:05.173083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.910 [2024-07-20 19:04:05.173114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.910 qpair failed and we were unable to recover it. 00:33:54.910 [2024-07-20 19:04:05.182809] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.910 [2024-07-20 19:04:05.183044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.910 [2024-07-20 19:04:05.183070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.910 [2024-07-20 19:04:05.183085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.910 [2024-07-20 19:04:05.183098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.910 [2024-07-20 19:04:05.183127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.910 qpair failed and we were unable to recover it. 00:33:54.910 [2024-07-20 19:04:05.192856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.910 [2024-07-20 19:04:05.193076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.910 [2024-07-20 19:04:05.193102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.910 [2024-07-20 19:04:05.193116] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.910 [2024-07-20 19:04:05.193129] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.910 [2024-07-20 19:04:05.193158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.910 qpair failed and we were unable to recover it. 
00:33:54.910 [2024-07-20 19:04:05.202863] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.910 [2024-07-20 19:04:05.203097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.910 [2024-07-20 19:04:05.203123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.910 [2024-07-20 19:04:05.203138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.910 [2024-07-20 19:04:05.203151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.910 [2024-07-20 19:04:05.203182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.910 qpair failed and we were unable to recover it. 00:33:54.910 [2024-07-20 19:04:05.212873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.910 [2024-07-20 19:04:05.213084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.910 [2024-07-20 19:04:05.213110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.910 [2024-07-20 19:04:05.213125] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.910 [2024-07-20 19:04:05.213138] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.910 [2024-07-20 19:04:05.213166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.910 qpair failed and we were unable to recover it. 00:33:54.910 [2024-07-20 19:04:05.222888] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.910 [2024-07-20 19:04:05.223101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.910 [2024-07-20 19:04:05.223127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.910 [2024-07-20 19:04:05.223143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.910 [2024-07-20 19:04:05.223156] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:54.910 [2024-07-20 19:04:05.223187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:54.910 qpair failed and we were unable to recover it. 
00:33:55.168 [2024-07-20 19:04:05.232921] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.168 [2024-07-20 19:04:05.233136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.168 [2024-07-20 19:04:05.233164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.168 [2024-07-20 19:04:05.233185] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.168 [2024-07-20 19:04:05.233199] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.168 [2024-07-20 19:04:05.233230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.168 qpair failed and we were unable to recover it. 00:33:55.168 [2024-07-20 19:04:05.242950] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.168 [2024-07-20 19:04:05.243166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.168 [2024-07-20 19:04:05.243195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.168 [2024-07-20 19:04:05.243210] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.168 [2024-07-20 19:04:05.243223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.168 [2024-07-20 19:04:05.243253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.168 qpair failed and we were unable to recover it. 00:33:55.168 [2024-07-20 19:04:05.252976] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.168 [2024-07-20 19:04:05.253186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.168 [2024-07-20 19:04:05.253213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.168 [2024-07-20 19:04:05.253227] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.168 [2024-07-20 19:04:05.253241] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.168 [2024-07-20 19:04:05.253270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.168 qpair failed and we were unable to recover it. 
00:33:55.168 [2024-07-20 19:04:05.262996] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.168 [2024-07-20 19:04:05.263211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.168 [2024-07-20 19:04:05.263236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.168 [2024-07-20 19:04:05.263251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.168 [2024-07-20 19:04:05.263264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.168 [2024-07-20 19:04:05.263293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.168 qpair failed and we were unable to recover it. 00:33:55.168 [2024-07-20 19:04:05.273034] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.168 [2024-07-20 19:04:05.273236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.168 [2024-07-20 19:04:05.273261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.168 [2024-07-20 19:04:05.273276] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.168 [2024-07-20 19:04:05.273289] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.168 [2024-07-20 19:04:05.273317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.168 qpair failed and we were unable to recover it. 00:33:55.168 [2024-07-20 19:04:05.283056] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.168 [2024-07-20 19:04:05.283264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.168 [2024-07-20 19:04:05.283290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.168 [2024-07-20 19:04:05.283304] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.168 [2024-07-20 19:04:05.283317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.168 [2024-07-20 19:04:05.283346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.168 qpair failed and we were unable to recover it. 
00:33:55.168 [2024-07-20 19:04:05.293161] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.168 [2024-07-20 19:04:05.293472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.168 [2024-07-20 19:04:05.293498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.168 [2024-07-20 19:04:05.293513] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.168 [2024-07-20 19:04:05.293526] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.168 [2024-07-20 19:04:05.293554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.168 qpair failed and we were unable to recover it. 00:33:55.168 [2024-07-20 19:04:05.303251] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.168 [2024-07-20 19:04:05.303474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.168 [2024-07-20 19:04:05.303501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.168 [2024-07-20 19:04:05.303515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.168 [2024-07-20 19:04:05.303528] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.168 [2024-07-20 19:04:05.303557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.168 qpair failed and we were unable to recover it. 00:33:55.168 [2024-07-20 19:04:05.313200] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.168 [2024-07-20 19:04:05.313411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.168 [2024-07-20 19:04:05.313437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.168 [2024-07-20 19:04:05.313451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.168 [2024-07-20 19:04:05.313464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.168 [2024-07-20 19:04:05.313492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.168 qpair failed and we were unable to recover it. 
00:33:55.168 [2024-07-20 19:04:05.323205] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.168 [2024-07-20 19:04:05.323426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.168 [2024-07-20 19:04:05.323459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.168 [2024-07-20 19:04:05.323478] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.168 [2024-07-20 19:04:05.323492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.168 [2024-07-20 19:04:05.323521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.168 qpair failed and we were unable to recover it. 00:33:55.168 [2024-07-20 19:04:05.333219] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.168 [2024-07-20 19:04:05.333471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.168 [2024-07-20 19:04:05.333496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.168 [2024-07-20 19:04:05.333511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.169 [2024-07-20 19:04:05.333524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.169 [2024-07-20 19:04:05.333553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.169 qpair failed and we were unable to recover it. 00:33:55.169 [2024-07-20 19:04:05.343277] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.169 [2024-07-20 19:04:05.343487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.169 [2024-07-20 19:04:05.343513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.169 [2024-07-20 19:04:05.343528] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.169 [2024-07-20 19:04:05.343542] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.169 [2024-07-20 19:04:05.343571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.169 qpair failed and we were unable to recover it. 
00:33:55.169 [2024-07-20 19:04:05.353307] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.169 [2024-07-20 19:04:05.353559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.169 [2024-07-20 19:04:05.353584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.169 [2024-07-20 19:04:05.353599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.169 [2024-07-20 19:04:05.353612] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.169 [2024-07-20 19:04:05.353640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.169 qpair failed and we were unable to recover it. 00:33:55.169 [2024-07-20 19:04:05.363319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.169 [2024-07-20 19:04:05.363559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.169 [2024-07-20 19:04:05.363584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.169 [2024-07-20 19:04:05.363599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.169 [2024-07-20 19:04:05.363613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.169 [2024-07-20 19:04:05.363645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.169 qpair failed and we were unable to recover it. 00:33:55.169 [2024-07-20 19:04:05.373360] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.169 [2024-07-20 19:04:05.373571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.169 [2024-07-20 19:04:05.373597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.169 [2024-07-20 19:04:05.373612] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.169 [2024-07-20 19:04:05.373625] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.169 [2024-07-20 19:04:05.373653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.169 qpair failed and we were unable to recover it. 
00:33:55.169 [2024-07-20 19:04:05.383374] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.169 [2024-07-20 19:04:05.383584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.169 [2024-07-20 19:04:05.383611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.169 [2024-07-20 19:04:05.383625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.169 [2024-07-20 19:04:05.383638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.169 [2024-07-20 19:04:05.383669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.169 qpair failed and we were unable to recover it. 00:33:55.169 [2024-07-20 19:04:05.393372] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.169 [2024-07-20 19:04:05.393580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.169 [2024-07-20 19:04:05.393606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.169 [2024-07-20 19:04:05.393621] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.169 [2024-07-20 19:04:05.393634] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.169 [2024-07-20 19:04:05.393662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.169 qpair failed and we were unable to recover it. 00:33:55.169 [2024-07-20 19:04:05.403386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.169 [2024-07-20 19:04:05.403587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.169 [2024-07-20 19:04:05.403613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.169 [2024-07-20 19:04:05.403627] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.169 [2024-07-20 19:04:05.403641] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.169 [2024-07-20 19:04:05.403669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.169 qpair failed and we were unable to recover it. 
00:33:55.169 [2024-07-20 19:04:05.413487] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.169 [2024-07-20 19:04:05.413709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.169 [2024-07-20 19:04:05.413742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.169 [2024-07-20 19:04:05.413760] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.169 [2024-07-20 19:04:05.413773] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.169 [2024-07-20 19:04:05.413817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.169 qpair failed and we were unable to recover it. 00:33:55.169 [2024-07-20 19:04:05.423487] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.169 [2024-07-20 19:04:05.423759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.169 [2024-07-20 19:04:05.423784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.169 [2024-07-20 19:04:05.423812] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.169 [2024-07-20 19:04:05.423828] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.169 [2024-07-20 19:04:05.423858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.169 qpair failed and we were unable to recover it. 00:33:55.169 [2024-07-20 19:04:05.433530] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.169 [2024-07-20 19:04:05.433734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.169 [2024-07-20 19:04:05.433760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.169 [2024-07-20 19:04:05.433775] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.169 [2024-07-20 19:04:05.433788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.169 [2024-07-20 19:04:05.433827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.169 qpair failed and we were unable to recover it. 
00:33:55.169 [2024-07-20 19:04:05.443531] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.169 [2024-07-20 19:04:05.443742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.169 [2024-07-20 19:04:05.443768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.169 [2024-07-20 19:04:05.443783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.169 [2024-07-20 19:04:05.443807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.169 [2024-07-20 19:04:05.443840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.170 qpair failed and we were unable to recover it. 00:33:55.170 [2024-07-20 19:04:05.453530] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.170 [2024-07-20 19:04:05.453744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.170 [2024-07-20 19:04:05.453769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.170 [2024-07-20 19:04:05.453783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.170 [2024-07-20 19:04:05.453804] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.170 [2024-07-20 19:04:05.453841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.170 qpair failed and we were unable to recover it. 00:33:55.170 [2024-07-20 19:04:05.463552] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.170 [2024-07-20 19:04:05.463770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.170 [2024-07-20 19:04:05.463808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.170 [2024-07-20 19:04:05.463827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.170 [2024-07-20 19:04:05.463841] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.170 [2024-07-20 19:04:05.463872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.170 qpair failed and we were unable to recover it. 
00:33:55.170 [2024-07-20 19:04:05.473621] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.170 [2024-07-20 19:04:05.473841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.170 [2024-07-20 19:04:05.473867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.170 [2024-07-20 19:04:05.473882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.170 [2024-07-20 19:04:05.473895] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.170 [2024-07-20 19:04:05.473923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.170 qpair failed and we were unable to recover it. 00:33:55.170 [2024-07-20 19:04:05.483612] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.170 [2024-07-20 19:04:05.483830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.170 [2024-07-20 19:04:05.483856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.170 [2024-07-20 19:04:05.483870] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.170 [2024-07-20 19:04:05.483883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.170 [2024-07-20 19:04:05.483912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.170 qpair failed and we were unable to recover it. 00:33:55.428 [2024-07-20 19:04:05.493708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.428 [2024-07-20 19:04:05.493969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.428 [2024-07-20 19:04:05.493997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.428 [2024-07-20 19:04:05.494012] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.428 [2024-07-20 19:04:05.494026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.428 [2024-07-20 19:04:05.494056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.428 qpair failed and we were unable to recover it. 
00:33:55.428 [2024-07-20 19:04:05.503687] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.428 [2024-07-20 19:04:05.503917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.428 [2024-07-20 19:04:05.503953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.428 [2024-07-20 19:04:05.503969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.428 [2024-07-20 19:04:05.503982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.428 [2024-07-20 19:04:05.504012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.428 qpair failed and we were unable to recover it. 00:33:55.428 [2024-07-20 19:04:05.513723] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.428 [2024-07-20 19:04:05.513954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.428 [2024-07-20 19:04:05.513981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.428 [2024-07-20 19:04:05.513995] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.428 [2024-07-20 19:04:05.514008] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.428 [2024-07-20 19:04:05.514037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.428 qpair failed and we were unable to recover it. 00:33:55.428 [2024-07-20 19:04:05.523734] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.428 [2024-07-20 19:04:05.523946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.428 [2024-07-20 19:04:05.523972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.428 [2024-07-20 19:04:05.523988] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.428 [2024-07-20 19:04:05.524001] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.428 [2024-07-20 19:04:05.524029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.428 qpair failed and we were unable to recover it. 
00:33:55.428 [2024-07-20 19:04:05.533770] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.428 [2024-07-20 19:04:05.534030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.428 [2024-07-20 19:04:05.534057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.428 [2024-07-20 19:04:05.534071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.428 [2024-07-20 19:04:05.534084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.428 [2024-07-20 19:04:05.534113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.428 qpair failed and we were unable to recover it. 00:33:55.428 [2024-07-20 19:04:05.543972] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.428 [2024-07-20 19:04:05.544199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.428 [2024-07-20 19:04:05.544225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.428 [2024-07-20 19:04:05.544240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.428 [2024-07-20 19:04:05.544253] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.428 [2024-07-20 19:04:05.544287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.428 qpair failed and we were unable to recover it. 00:33:55.428 [2024-07-20 19:04:05.553901] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.428 [2024-07-20 19:04:05.554201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.428 [2024-07-20 19:04:05.554227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.428 [2024-07-20 19:04:05.554242] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.428 [2024-07-20 19:04:05.554255] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.428 [2024-07-20 19:04:05.554283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.428 qpair failed and we were unable to recover it. 
00:33:55.428 [2024-07-20 19:04:05.563931] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.428 [2024-07-20 19:04:05.564148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.428 [2024-07-20 19:04:05.564174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.428 [2024-07-20 19:04:05.564188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.428 [2024-07-20 19:04:05.564201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.428 [2024-07-20 19:04:05.564229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.428 qpair failed and we were unable to recover it. 00:33:55.428 [2024-07-20 19:04:05.573927] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.428 [2024-07-20 19:04:05.574144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.428 [2024-07-20 19:04:05.574169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.428 [2024-07-20 19:04:05.574184] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.428 [2024-07-20 19:04:05.574197] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.428 [2024-07-20 19:04:05.574226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.428 qpair failed and we were unable to recover it. 00:33:55.428 [2024-07-20 19:04:05.583953] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.428 [2024-07-20 19:04:05.584170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.428 [2024-07-20 19:04:05.584196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.428 [2024-07-20 19:04:05.584211] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.428 [2024-07-20 19:04:05.584224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.428 [2024-07-20 19:04:05.584252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.428 qpair failed and we were unable to recover it. 
00:33:55.428 [2024-07-20 19:04:05.593940] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.428 [2024-07-20 19:04:05.594148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.428 [2024-07-20 19:04:05.594179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.428 [2024-07-20 19:04:05.594194] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.428 [2024-07-20 19:04:05.594207] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.429 [2024-07-20 19:04:05.594236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.429 qpair failed and we were unable to recover it. 00:33:55.429 [2024-07-20 19:04:05.603978] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.429 [2024-07-20 19:04:05.604189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.429 [2024-07-20 19:04:05.604214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.429 [2024-07-20 19:04:05.604229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.429 [2024-07-20 19:04:05.604242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.429 [2024-07-20 19:04:05.604271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.429 qpair failed and we were unable to recover it. 00:33:55.429 [2024-07-20 19:04:05.614037] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.429 [2024-07-20 19:04:05.614254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.429 [2024-07-20 19:04:05.614279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.429 [2024-07-20 19:04:05.614294] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.429 [2024-07-20 19:04:05.614307] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.429 [2024-07-20 19:04:05.614336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.429 qpair failed and we were unable to recover it. 
00:33:55.429 [2024-07-20 19:04:05.624055] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.429 [2024-07-20 19:04:05.624282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.429 [2024-07-20 19:04:05.624309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.429 [2024-07-20 19:04:05.624328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.429 [2024-07-20 19:04:05.624342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.429 [2024-07-20 19:04:05.624372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.429 qpair failed and we were unable to recover it. 00:33:55.429 [2024-07-20 19:04:05.634052] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.429 [2024-07-20 19:04:05.634262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.429 [2024-07-20 19:04:05.634288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.429 [2024-07-20 19:04:05.634303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.429 [2024-07-20 19:04:05.634317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.429 [2024-07-20 19:04:05.634351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.429 qpair failed and we were unable to recover it. 00:33:55.429 [2024-07-20 19:04:05.644117] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.429 [2024-07-20 19:04:05.644376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.429 [2024-07-20 19:04:05.644401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.429 [2024-07-20 19:04:05.644416] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.429 [2024-07-20 19:04:05.644429] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.429 [2024-07-20 19:04:05.644457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.429 qpair failed and we were unable to recover it. 
00:33:55.429 [2024-07-20 19:04:05.654165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.429 [2024-07-20 19:04:05.654411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.429 [2024-07-20 19:04:05.654436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.429 [2024-07-20 19:04:05.654450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.429 [2024-07-20 19:04:05.654464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.429 [2024-07-20 19:04:05.654492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.429 qpair failed and we were unable to recover it. 00:33:55.429 [2024-07-20 19:04:05.664143] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.429 [2024-07-20 19:04:05.664354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.429 [2024-07-20 19:04:05.664380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.429 [2024-07-20 19:04:05.664394] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.429 [2024-07-20 19:04:05.664407] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.429 [2024-07-20 19:04:05.664436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.429 qpair failed and we were unable to recover it. 00:33:55.429 [2024-07-20 19:04:05.674159] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.429 [2024-07-20 19:04:05.674365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.429 [2024-07-20 19:04:05.674391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.429 [2024-07-20 19:04:05.674406] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.429 [2024-07-20 19:04:05.674419] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.429 [2024-07-20 19:04:05.674447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.429 qpair failed and we were unable to recover it. 
00:33:55.429 [2024-07-20 19:04:05.684251] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.429 [2024-07-20 19:04:05.684455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.429 [2024-07-20 19:04:05.684485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.429 [2024-07-20 19:04:05.684501] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.429 [2024-07-20 19:04:05.684514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.429 [2024-07-20 19:04:05.684543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.429 qpair failed and we were unable to recover it. 00:33:55.429 [2024-07-20 19:04:05.694223] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.429 [2024-07-20 19:04:05.694437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.429 [2024-07-20 19:04:05.694462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.429 [2024-07-20 19:04:05.694477] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.429 [2024-07-20 19:04:05.694489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.429 [2024-07-20 19:04:05.694518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.429 qpair failed and we were unable to recover it. 00:33:55.429 [2024-07-20 19:04:05.704260] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.429 [2024-07-20 19:04:05.704484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.429 [2024-07-20 19:04:05.704510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.429 [2024-07-20 19:04:05.704525] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.429 [2024-07-20 19:04:05.704538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.429 [2024-07-20 19:04:05.704566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.429 qpair failed and we were unable to recover it. 
00:33:55.429 [2024-07-20 19:04:05.714273] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.430 [2024-07-20 19:04:05.714484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.430 [2024-07-20 19:04:05.714510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.430 [2024-07-20 19:04:05.714524] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.430 [2024-07-20 19:04:05.714537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.430 [2024-07-20 19:04:05.714566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.430 qpair failed and we were unable to recover it. 00:33:55.430 [2024-07-20 19:04:05.724298] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.430 [2024-07-20 19:04:05.724504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.430 [2024-07-20 19:04:05.724530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.430 [2024-07-20 19:04:05.724544] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.430 [2024-07-20 19:04:05.724563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.430 [2024-07-20 19:04:05.724592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.430 qpair failed and we were unable to recover it. 00:33:55.430 [2024-07-20 19:04:05.734334] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.430 [2024-07-20 19:04:05.734546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.430 [2024-07-20 19:04:05.734571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.430 [2024-07-20 19:04:05.734585] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.430 [2024-07-20 19:04:05.734599] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.430 [2024-07-20 19:04:05.734627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.430 qpair failed and we were unable to recover it. 
00:33:55.430 [2024-07-20 19:04:05.744395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.430 [2024-07-20 19:04:05.744663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.430 [2024-07-20 19:04:05.744690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.430 [2024-07-20 19:04:05.744710] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.430 [2024-07-20 19:04:05.744724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.430 [2024-07-20 19:04:05.744754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.430 qpair failed and we were unable to recover it. 00:33:55.688 [2024-07-20 19:04:05.754425] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.688 [2024-07-20 19:04:05.754712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.688 [2024-07-20 19:04:05.754740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.689 [2024-07-20 19:04:05.754755] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.689 [2024-07-20 19:04:05.754769] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.689 [2024-07-20 19:04:05.754815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.689 qpair failed and we were unable to recover it. 00:33:55.689 [2024-07-20 19:04:05.764418] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.689 [2024-07-20 19:04:05.764620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.689 [2024-07-20 19:04:05.764647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.689 [2024-07-20 19:04:05.764662] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.689 [2024-07-20 19:04:05.764675] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.689 [2024-07-20 19:04:05.764704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.689 qpair failed and we were unable to recover it. 
00:33:55.689 [2024-07-20 19:04:05.774451] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.689 [2024-07-20 19:04:05.774665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.689 [2024-07-20 19:04:05.774692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.689 [2024-07-20 19:04:05.774706] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.689 [2024-07-20 19:04:05.774719] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.689 [2024-07-20 19:04:05.774748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.689 qpair failed and we were unable to recover it. 00:33:55.689 [2024-07-20 19:04:05.784491] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.689 [2024-07-20 19:04:05.784708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.689 [2024-07-20 19:04:05.784734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.689 [2024-07-20 19:04:05.784749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.689 [2024-07-20 19:04:05.784762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.689 [2024-07-20 19:04:05.784804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.689 qpair failed and we were unable to recover it. 00:33:55.689 [2024-07-20 19:04:05.794544] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.689 [2024-07-20 19:04:05.794789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.689 [2024-07-20 19:04:05.794822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.689 [2024-07-20 19:04:05.794836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.689 [2024-07-20 19:04:05.794850] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.689 [2024-07-20 19:04:05.794879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.689 qpair failed and we were unable to recover it. 
00:33:55.689 [2024-07-20 19:04:05.804526] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.689 [2024-07-20 19:04:05.804734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.689 [2024-07-20 19:04:05.804760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.689 [2024-07-20 19:04:05.804774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.689 [2024-07-20 19:04:05.804788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.689 [2024-07-20 19:04:05.804825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.689 qpair failed and we were unable to recover it. 00:33:55.689 [2024-07-20 19:04:05.814583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.689 [2024-07-20 19:04:05.814801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.689 [2024-07-20 19:04:05.814827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.689 [2024-07-20 19:04:05.814841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.689 [2024-07-20 19:04:05.814860] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.689 [2024-07-20 19:04:05.814890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.689 qpair failed and we were unable to recover it. 00:33:55.689 [2024-07-20 19:04:05.824611] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.689 [2024-07-20 19:04:05.824829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.689 [2024-07-20 19:04:05.824855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.689 [2024-07-20 19:04:05.824869] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.689 [2024-07-20 19:04:05.824882] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.689 [2024-07-20 19:04:05.824911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.689 qpair failed and we were unable to recover it. 
00:33:55.689 [2024-07-20 19:04:05.834614] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.689 [2024-07-20 19:04:05.834821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.689 [2024-07-20 19:04:05.834847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.689 [2024-07-20 19:04:05.834862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.689 [2024-07-20 19:04:05.834875] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.689 [2024-07-20 19:04:05.834904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.689 qpair failed and we were unable to recover it. 00:33:55.689 [2024-07-20 19:04:05.844651] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.689 [2024-07-20 19:04:05.844878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.689 [2024-07-20 19:04:05.844904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.689 [2024-07-20 19:04:05.844918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.689 [2024-07-20 19:04:05.844932] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.689 [2024-07-20 19:04:05.844960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.689 qpair failed and we were unable to recover it. 00:33:55.689 [2024-07-20 19:04:05.854730] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.689 [2024-07-20 19:04:05.854958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.689 [2024-07-20 19:04:05.854983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.689 [2024-07-20 19:04:05.854999] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.689 [2024-07-20 19:04:05.855012] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.689 [2024-07-20 19:04:05.855041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.689 qpair failed and we were unable to recover it. 
00:33:55.689 [2024-07-20 19:04:05.864744] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.690 [2024-07-20 19:04:05.864991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.690 [2024-07-20 19:04:05.865018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.690 [2024-07-20 19:04:05.865032] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.690 [2024-07-20 19:04:05.865045] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.690 [2024-07-20 19:04:05.865075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.690 qpair failed and we were unable to recover it. 00:33:55.690 [2024-07-20 19:04:05.874727] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.690 [2024-07-20 19:04:05.874941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.690 [2024-07-20 19:04:05.874967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.690 [2024-07-20 19:04:05.874982] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.690 [2024-07-20 19:04:05.874995] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.690 [2024-07-20 19:04:05.875024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.690 qpair failed and we were unable to recover it. 00:33:55.690 [2024-07-20 19:04:05.884757] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.690 [2024-07-20 19:04:05.884989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.690 [2024-07-20 19:04:05.885015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.690 [2024-07-20 19:04:05.885030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.690 [2024-07-20 19:04:05.885043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.690 [2024-07-20 19:04:05.885071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.690 qpair failed and we were unable to recover it. 
00:33:55.690 [2024-07-20 19:04:05.894830] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.690 [2024-07-20 19:04:05.895040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.690 [2024-07-20 19:04:05.895065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.690 [2024-07-20 19:04:05.895079] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.690 [2024-07-20 19:04:05.895092] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.690 [2024-07-20 19:04:05.895121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.690 qpair failed and we were unable to recover it. 00:33:55.690 [2024-07-20 19:04:05.904856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.690 [2024-07-20 19:04:05.905069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.690 [2024-07-20 19:04:05.905094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.690 [2024-07-20 19:04:05.905109] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.690 [2024-07-20 19:04:05.905127] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.690 [2024-07-20 19:04:05.905157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.690 qpair failed and we were unable to recover it. 00:33:55.690 [2024-07-20 19:04:05.914910] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.690 [2024-07-20 19:04:05.915145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.690 [2024-07-20 19:04:05.915171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.690 [2024-07-20 19:04:05.915186] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.690 [2024-07-20 19:04:05.915199] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.690 [2024-07-20 19:04:05.915228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.690 qpair failed and we were unable to recover it. 
00:33:55.690 [2024-07-20 19:04:05.924898] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.690 [2024-07-20 19:04:05.925106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.690 [2024-07-20 19:04:05.925130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.690 [2024-07-20 19:04:05.925145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.690 [2024-07-20 19:04:05.925158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.690 [2024-07-20 19:04:05.925186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.690 qpair failed and we were unable to recover it. 00:33:55.690 [2024-07-20 19:04:05.934959] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.690 [2024-07-20 19:04:05.935173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.690 [2024-07-20 19:04:05.935198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.690 [2024-07-20 19:04:05.935212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.690 [2024-07-20 19:04:05.935225] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.690 [2024-07-20 19:04:05.935254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.690 qpair failed and we were unable to recover it. 00:33:55.690 [2024-07-20 19:04:05.944946] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.690 [2024-07-20 19:04:05.945198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.690 [2024-07-20 19:04:05.945223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.690 [2024-07-20 19:04:05.945237] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.690 [2024-07-20 19:04:05.945250] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.690 [2024-07-20 19:04:05.945279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.690 qpair failed and we were unable to recover it. 
00:33:55.690 [2024-07-20 19:04:05.954980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.690 [2024-07-20 19:04:05.955196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.690 [2024-07-20 19:04:05.955221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.690 [2024-07-20 19:04:05.955235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.690 [2024-07-20 19:04:05.955249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.690 [2024-07-20 19:04:05.955277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.690 qpair failed and we were unable to recover it. 00:33:55.690 [2024-07-20 19:04:05.965018] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.690 [2024-07-20 19:04:05.965269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.690 [2024-07-20 19:04:05.965295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.690 [2024-07-20 19:04:05.965310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.690 [2024-07-20 19:04:05.965323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.690 [2024-07-20 19:04:05.965351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.690 qpair failed and we were unable to recover it. 00:33:55.690 [2024-07-20 19:04:05.975050] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.691 [2024-07-20 19:04:05.975271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.691 [2024-07-20 19:04:05.975297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.691 [2024-07-20 19:04:05.975311] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.691 [2024-07-20 19:04:05.975324] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.691 [2024-07-20 19:04:05.975352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.691 qpair failed and we were unable to recover it. 
00:33:55.691 [2024-07-20 19:04:05.985057] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.691 [2024-07-20 19:04:05.985268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.691 [2024-07-20 19:04:05.985294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.691 [2024-07-20 19:04:05.985308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.691 [2024-07-20 19:04:05.985321] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.691 [2024-07-20 19:04:05.985350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.691 qpair failed and we were unable to recover it. 00:33:55.691 [2024-07-20 19:04:05.995111] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.691 [2024-07-20 19:04:05.995322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.691 [2024-07-20 19:04:05.995347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.691 [2024-07-20 19:04:05.995368] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.691 [2024-07-20 19:04:05.995382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.691 [2024-07-20 19:04:05.995412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.691 qpair failed and we were unable to recover it. 00:33:55.691 [2024-07-20 19:04:06.005138] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.691 [2024-07-20 19:04:06.005344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.691 [2024-07-20 19:04:06.005369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.691 [2024-07-20 19:04:06.005384] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.691 [2024-07-20 19:04:06.005397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.691 [2024-07-20 19:04:06.005425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.691 qpair failed and we were unable to recover it. 
00:33:55.950 [2024-07-20 19:04:06.015193] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.950 [2024-07-20 19:04:06.015484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.950 [2024-07-20 19:04:06.015512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.950 [2024-07-20 19:04:06.015527] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.950 [2024-07-20 19:04:06.015540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.950 [2024-07-20 19:04:06.015569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.950 qpair failed and we were unable to recover it. 00:33:55.950 [2024-07-20 19:04:06.025209] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.950 [2024-07-20 19:04:06.025430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.950 [2024-07-20 19:04:06.025457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.950 [2024-07-20 19:04:06.025472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.950 [2024-07-20 19:04:06.025486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.950 [2024-07-20 19:04:06.025515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.950 qpair failed and we were unable to recover it. 00:33:55.950 [2024-07-20 19:04:06.035254] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.950 [2024-07-20 19:04:06.035470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.950 [2024-07-20 19:04:06.035496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.950 [2024-07-20 19:04:06.035511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.950 [2024-07-20 19:04:06.035524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.950 [2024-07-20 19:04:06.035553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.950 qpair failed and we were unable to recover it. 
00:33:55.950 [2024-07-20 19:04:06.045225] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.950 [2024-07-20 19:04:06.045427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.950 [2024-07-20 19:04:06.045454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.950 [2024-07-20 19:04:06.045468] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.950 [2024-07-20 19:04:06.045481] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.950 [2024-07-20 19:04:06.045510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.950 qpair failed and we were unable to recover it. 00:33:55.950 [2024-07-20 19:04:06.055269] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.950 [2024-07-20 19:04:06.055492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.950 [2024-07-20 19:04:06.055518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.950 [2024-07-20 19:04:06.055532] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.950 [2024-07-20 19:04:06.055545] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.950 [2024-07-20 19:04:06.055574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.950 qpair failed and we were unable to recover it. 00:33:55.950 [2024-07-20 19:04:06.065324] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.950 [2024-07-20 19:04:06.065535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.950 [2024-07-20 19:04:06.065561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.950 [2024-07-20 19:04:06.065575] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.950 [2024-07-20 19:04:06.065588] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.950 [2024-07-20 19:04:06.065617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.950 qpair failed and we were unable to recover it. 
00:33:55.950 [2024-07-20 19:04:06.075327] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.950 [2024-07-20 19:04:06.075541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.950 [2024-07-20 19:04:06.075567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.951 [2024-07-20 19:04:06.075581] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.951 [2024-07-20 19:04:06.075594] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.951 [2024-07-20 19:04:06.075622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.951 qpair failed and we were unable to recover it. 00:33:55.951 [2024-07-20 19:04:06.085392] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.951 [2024-07-20 19:04:06.085651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.951 [2024-07-20 19:04:06.085676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.951 [2024-07-20 19:04:06.085697] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.951 [2024-07-20 19:04:06.085711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.951 [2024-07-20 19:04:06.085740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.951 qpair failed and we were unable to recover it. 00:33:55.951 [2024-07-20 19:04:06.095506] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.951 [2024-07-20 19:04:06.095728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.951 [2024-07-20 19:04:06.095753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.951 [2024-07-20 19:04:06.095768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.951 [2024-07-20 19:04:06.095781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.951 [2024-07-20 19:04:06.095817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.951 qpair failed and we were unable to recover it. 
00:33:55.951 [2024-07-20 19:04:06.105437] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.951 [2024-07-20 19:04:06.105689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.951 [2024-07-20 19:04:06.105720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.951 [2024-07-20 19:04:06.105736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.951 [2024-07-20 19:04:06.105749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.951 [2024-07-20 19:04:06.105779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.951 qpair failed and we were unable to recover it. 00:33:55.951 [2024-07-20 19:04:06.115463] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.951 [2024-07-20 19:04:06.115711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.951 [2024-07-20 19:04:06.115739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.951 [2024-07-20 19:04:06.115754] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.951 [2024-07-20 19:04:06.115767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.951 [2024-07-20 19:04:06.115803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.951 qpair failed and we were unable to recover it. 00:33:55.951 [2024-07-20 19:04:06.125569] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.951 [2024-07-20 19:04:06.125782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.951 [2024-07-20 19:04:06.125817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.951 [2024-07-20 19:04:06.125832] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.951 [2024-07-20 19:04:06.125846] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.951 [2024-07-20 19:04:06.125877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.951 qpair failed and we were unable to recover it. 
00:33:55.951 [2024-07-20 19:04:06.135596] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.951 [2024-07-20 19:04:06.135814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.951 [2024-07-20 19:04:06.135841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.951 [2024-07-20 19:04:06.135856] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.951 [2024-07-20 19:04:06.135869] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.951 [2024-07-20 19:04:06.135898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.951 qpair failed and we were unable to recover it. 00:33:55.951 [2024-07-20 19:04:06.145572] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.951 [2024-07-20 19:04:06.145784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.951 [2024-07-20 19:04:06.145817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.951 [2024-07-20 19:04:06.145832] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.951 [2024-07-20 19:04:06.145846] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.951 [2024-07-20 19:04:06.145875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.951 qpair failed and we were unable to recover it. 00:33:55.951 [2024-07-20 19:04:06.155576] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.951 [2024-07-20 19:04:06.155810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.951 [2024-07-20 19:04:06.155836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.951 [2024-07-20 19:04:06.155851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.951 [2024-07-20 19:04:06.155865] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.951 [2024-07-20 19:04:06.155893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.951 qpair failed and we were unable to recover it. 
00:33:55.951 [2024-07-20 19:04:06.165588] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.951 [2024-07-20 19:04:06.165791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.951 [2024-07-20 19:04:06.165825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.951 [2024-07-20 19:04:06.165839] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.951 [2024-07-20 19:04:06.165853] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.951 [2024-07-20 19:04:06.165881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.951 qpair failed and we were unable to recover it. 00:33:55.951 [2024-07-20 19:04:06.175643] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.951 [2024-07-20 19:04:06.175862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.951 [2024-07-20 19:04:06.175889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.951 [2024-07-20 19:04:06.175910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.951 [2024-07-20 19:04:06.175923] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.951 [2024-07-20 19:04:06.175952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.951 qpair failed and we were unable to recover it. 00:33:55.952 [2024-07-20 19:04:06.185637] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.952 [2024-07-20 19:04:06.185856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.952 [2024-07-20 19:04:06.185882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.952 [2024-07-20 19:04:06.185897] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.952 [2024-07-20 19:04:06.185910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.952 [2024-07-20 19:04:06.185938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.952 qpair failed and we were unable to recover it. 
00:33:55.952 [2024-07-20 19:04:06.195668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.952 [2024-07-20 19:04:06.195897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.952 [2024-07-20 19:04:06.195923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.952 [2024-07-20 19:04:06.195938] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.952 [2024-07-20 19:04:06.195951] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.952 [2024-07-20 19:04:06.195980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.952 qpair failed and we were unable to recover it. 00:33:55.952 [2024-07-20 19:04:06.205722] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.952 [2024-07-20 19:04:06.205955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.952 [2024-07-20 19:04:06.205982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.952 [2024-07-20 19:04:06.205997] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.952 [2024-07-20 19:04:06.206010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.952 [2024-07-20 19:04:06.206041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.952 qpair failed and we were unable to recover it. 00:33:55.952 [2024-07-20 19:04:06.215739] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.952 [2024-07-20 19:04:06.216001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.952 [2024-07-20 19:04:06.216027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.952 [2024-07-20 19:04:06.216042] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.952 [2024-07-20 19:04:06.216055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.952 [2024-07-20 19:04:06.216084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.952 qpair failed and we were unable to recover it. 
00:33:55.952 [2024-07-20 19:04:06.225863] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.952 [2024-07-20 19:04:06.226080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.952 [2024-07-20 19:04:06.226106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.952 [2024-07-20 19:04:06.226121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.952 [2024-07-20 19:04:06.226134] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.952 [2024-07-20 19:04:06.226163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.952 qpair failed and we were unable to recover it. 00:33:55.952 [2024-07-20 19:04:06.235809] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.952 [2024-07-20 19:04:06.236036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.952 [2024-07-20 19:04:06.236061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.952 [2024-07-20 19:04:06.236076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.952 [2024-07-20 19:04:06.236089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.952 [2024-07-20 19:04:06.236117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.952 qpair failed and we were unable to recover it. 00:33:55.952 [2024-07-20 19:04:06.245827] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.952 [2024-07-20 19:04:06.246064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.952 [2024-07-20 19:04:06.246090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.952 [2024-07-20 19:04:06.246105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.952 [2024-07-20 19:04:06.246118] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.952 [2024-07-20 19:04:06.246146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.952 qpair failed and we were unable to recover it. 
00:33:55.952 [2024-07-20 19:04:06.255884] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.952 [2024-07-20 19:04:06.256097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.952 [2024-07-20 19:04:06.256123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.952 [2024-07-20 19:04:06.256137] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.952 [2024-07-20 19:04:06.256151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.952 [2024-07-20 19:04:06.256179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.952 qpair failed and we were unable to recover it. 00:33:55.952 [2024-07-20 19:04:06.265874] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.952 [2024-07-20 19:04:06.266086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.952 [2024-07-20 19:04:06.266112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.952 [2024-07-20 19:04:06.266133] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.952 [2024-07-20 19:04:06.266146] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:55.952 [2024-07-20 19:04:06.266176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:55.952 qpair failed and we were unable to recover it. 00:33:56.211 [2024-07-20 19:04:06.275912] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.211 [2024-07-20 19:04:06.276122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.211 [2024-07-20 19:04:06.276151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.211 [2024-07-20 19:04:06.276166] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.211 [2024-07-20 19:04:06.276179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.212 [2024-07-20 19:04:06.276208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.212 qpair failed and we were unable to recover it. 
00:33:56.212 [2024-07-20 19:04:06.285931] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.212 [2024-07-20 19:04:06.286140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.212 [2024-07-20 19:04:06.286167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.212 [2024-07-20 19:04:06.286181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.212 [2024-07-20 19:04:06.286195] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.212 [2024-07-20 19:04:06.286224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.212 qpair failed and we were unable to recover it. 00:33:56.212 [2024-07-20 19:04:06.295971] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.212 [2024-07-20 19:04:06.296185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.212 [2024-07-20 19:04:06.296211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.212 [2024-07-20 19:04:06.296225] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.212 [2024-07-20 19:04:06.296238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.212 [2024-07-20 19:04:06.296267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.212 qpair failed and we were unable to recover it. 00:33:56.212 [2024-07-20 19:04:06.306011] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.212 [2024-07-20 19:04:06.306229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.212 [2024-07-20 19:04:06.306255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.212 [2024-07-20 19:04:06.306269] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.212 [2024-07-20 19:04:06.306283] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.212 [2024-07-20 19:04:06.306312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.212 qpair failed and we were unable to recover it. 
00:33:56.212 [2024-07-20 19:04:06.316055] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.212 [2024-07-20 19:04:06.316268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.212 [2024-07-20 19:04:06.316294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.212 [2024-07-20 19:04:06.316308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.212 [2024-07-20 19:04:06.316321] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.212 [2024-07-20 19:04:06.316349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.212 qpair failed and we were unable to recover it. 00:33:56.212 [2024-07-20 19:04:06.326083] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.212 [2024-07-20 19:04:06.326293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.212 [2024-07-20 19:04:06.326318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.212 [2024-07-20 19:04:06.326332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.212 [2024-07-20 19:04:06.326345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.212 [2024-07-20 19:04:06.326374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.212 qpair failed and we were unable to recover it. 00:33:56.212 [2024-07-20 19:04:06.336082] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.212 [2024-07-20 19:04:06.336299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.212 [2024-07-20 19:04:06.336325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.212 [2024-07-20 19:04:06.336340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.212 [2024-07-20 19:04:06.336353] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.212 [2024-07-20 19:04:06.336381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.212 qpair failed and we were unable to recover it. 
00:33:56.212 [2024-07-20 19:04:06.346100] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.212 [2024-07-20 19:04:06.346326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.212 [2024-07-20 19:04:06.346353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.212 [2024-07-20 19:04:06.346368] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.212 [2024-07-20 19:04:06.346382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.212 [2024-07-20 19:04:06.346410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.212 qpair failed and we were unable to recover it. 00:33:56.212 [2024-07-20 19:04:06.356157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.212 [2024-07-20 19:04:06.356396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.212 [2024-07-20 19:04:06.356427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.212 [2024-07-20 19:04:06.356443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.212 [2024-07-20 19:04:06.356456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.212 [2024-07-20 19:04:06.356485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.212 qpair failed and we were unable to recover it. 00:33:56.212 [2024-07-20 19:04:06.366240] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.212 [2024-07-20 19:04:06.366477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.212 [2024-07-20 19:04:06.366503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.212 [2024-07-20 19:04:06.366517] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.212 [2024-07-20 19:04:06.366532] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.212 [2024-07-20 19:04:06.366561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.212 qpair failed and we were unable to recover it. 
00:33:56.212 [2024-07-20 19:04:06.376247] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.212 [2024-07-20 19:04:06.376507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.212 [2024-07-20 19:04:06.376533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.212 [2024-07-20 19:04:06.376547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.212 [2024-07-20 19:04:06.376560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.212 [2024-07-20 19:04:06.376588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.212 qpair failed and we were unable to recover it. 00:33:56.212 [2024-07-20 19:04:06.386224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.212 [2024-07-20 19:04:06.386441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.213 [2024-07-20 19:04:06.386466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.213 [2024-07-20 19:04:06.386481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.213 [2024-07-20 19:04:06.386494] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.213 [2024-07-20 19:04:06.386523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.213 qpair failed and we were unable to recover it. 00:33:56.213 [2024-07-20 19:04:06.396286] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.213 [2024-07-20 19:04:06.396528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.213 [2024-07-20 19:04:06.396554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.213 [2024-07-20 19:04:06.396568] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.213 [2024-07-20 19:04:06.396581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.213 [2024-07-20 19:04:06.396616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.213 qpair failed and we were unable to recover it. 
00:33:56.213 [2024-07-20 19:04:06.406309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.213 [2024-07-20 19:04:06.406526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.213 [2024-07-20 19:04:06.406553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.213 [2024-07-20 19:04:06.406567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.213 [2024-07-20 19:04:06.406580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.213 [2024-07-20 19:04:06.406609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.213 qpair failed and we were unable to recover it. 00:33:56.213 [2024-07-20 19:04:06.416313] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.213 [2024-07-20 19:04:06.416526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.213 [2024-07-20 19:04:06.416551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.213 [2024-07-20 19:04:06.416566] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.213 [2024-07-20 19:04:06.416579] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.213 [2024-07-20 19:04:06.416608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.213 qpair failed and we were unable to recover it. 00:33:56.213 [2024-07-20 19:04:06.426327] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.213 [2024-07-20 19:04:06.426541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.213 [2024-07-20 19:04:06.426568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.213 [2024-07-20 19:04:06.426583] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.213 [2024-07-20 19:04:06.426596] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.213 [2024-07-20 19:04:06.426625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.213 qpair failed and we were unable to recover it. 
00:33:56.213 [2024-07-20 19:04:06.436377] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.213 [2024-07-20 19:04:06.436586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.213 [2024-07-20 19:04:06.436612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.213 [2024-07-20 19:04:06.436627] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.213 [2024-07-20 19:04:06.436640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.213 [2024-07-20 19:04:06.436667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.213 qpair failed and we were unable to recover it. 00:33:56.213 [2024-07-20 19:04:06.446444] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.213 [2024-07-20 19:04:06.446653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.213 [2024-07-20 19:04:06.446683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.213 [2024-07-20 19:04:06.446698] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.213 [2024-07-20 19:04:06.446711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.213 [2024-07-20 19:04:06.446739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.213 qpair failed and we were unable to recover it. 00:33:56.213 [2024-07-20 19:04:06.456436] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.213 [2024-07-20 19:04:06.456651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.213 [2024-07-20 19:04:06.456676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.213 [2024-07-20 19:04:06.456690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.213 [2024-07-20 19:04:06.456704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.213 [2024-07-20 19:04:06.456732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.213 qpair failed and we were unable to recover it. 
00:33:56.213 [2024-07-20 19:04:06.466465] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.213 [2024-07-20 19:04:06.466724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.213 [2024-07-20 19:04:06.466749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.213 [2024-07-20 19:04:06.466764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.213 [2024-07-20 19:04:06.466777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.213 [2024-07-20 19:04:06.466814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.213 qpair failed and we were unable to recover it. 00:33:56.213 [2024-07-20 19:04:06.476467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.213 [2024-07-20 19:04:06.476671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.213 [2024-07-20 19:04:06.476696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.213 [2024-07-20 19:04:06.476711] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.213 [2024-07-20 19:04:06.476724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.213 [2024-07-20 19:04:06.476753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.213 qpair failed and we were unable to recover it. 00:33:56.213 [2024-07-20 19:04:06.486498] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.213 [2024-07-20 19:04:06.486709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.213 [2024-07-20 19:04:06.486734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.214 [2024-07-20 19:04:06.486748] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.214 [2024-07-20 19:04:06.486762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.214 [2024-07-20 19:04:06.486804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.214 qpair failed and we were unable to recover it. 
00:33:56.214 [2024-07-20 19:04:06.496565] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.214 [2024-07-20 19:04:06.496778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.214 [2024-07-20 19:04:06.496811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.214 [2024-07-20 19:04:06.496826] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.214 [2024-07-20 19:04:06.496839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.214 [2024-07-20 19:04:06.496868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.214 qpair failed and we were unable to recover it. 00:33:56.214 [2024-07-20 19:04:06.506560] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.214 [2024-07-20 19:04:06.506776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.214 [2024-07-20 19:04:06.506811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.214 [2024-07-20 19:04:06.506827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.214 [2024-07-20 19:04:06.506840] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.214 [2024-07-20 19:04:06.506869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.214 qpair failed and we were unable to recover it. 00:33:56.214 [2024-07-20 19:04:06.516573] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.214 [2024-07-20 19:04:06.516802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.214 [2024-07-20 19:04:06.516828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.214 [2024-07-20 19:04:06.516843] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.214 [2024-07-20 19:04:06.516856] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.214 [2024-07-20 19:04:06.516884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.214 qpair failed and we were unable to recover it. 
00:33:56.214 [2024-07-20 19:04:06.526658] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.214 [2024-07-20 19:04:06.526880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.214 [2024-07-20 19:04:06.526905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.214 [2024-07-20 19:04:06.526920] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.214 [2024-07-20 19:04:06.526933] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.214 [2024-07-20 19:04:06.526961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.214 qpair failed and we were unable to recover it. 00:33:56.473 [2024-07-20 19:04:06.536648] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.473 [2024-07-20 19:04:06.536865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.473 [2024-07-20 19:04:06.536898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.473 [2024-07-20 19:04:06.536914] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.473 [2024-07-20 19:04:06.536927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.473 [2024-07-20 19:04:06.536956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.473 qpair failed and we were unable to recover it. 00:33:56.473 [2024-07-20 19:04:06.546678] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.473 [2024-07-20 19:04:06.546892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.473 [2024-07-20 19:04:06.546920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.473 [2024-07-20 19:04:06.546936] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.473 [2024-07-20 19:04:06.546949] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.473 [2024-07-20 19:04:06.546978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.473 qpair failed and we were unable to recover it. 
00:33:56.473 [2024-07-20 19:04:06.556742] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.473 [2024-07-20 19:04:06.556952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.473 [2024-07-20 19:04:06.556979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.473 [2024-07-20 19:04:06.556993] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.473 [2024-07-20 19:04:06.557007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.473 [2024-07-20 19:04:06.557036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.473 qpair failed and we were unable to recover it. 00:33:56.473 [2024-07-20 19:04:06.566746] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.473 [2024-07-20 19:04:06.567000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.473 [2024-07-20 19:04:06.567026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.473 [2024-07-20 19:04:06.567041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.473 [2024-07-20 19:04:06.567054] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.473 [2024-07-20 19:04:06.567083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.473 qpair failed and we were unable to recover it. 00:33:56.473 [2024-07-20 19:04:06.576770] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.473 [2024-07-20 19:04:06.576999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.473 [2024-07-20 19:04:06.577025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.473 [2024-07-20 19:04:06.577040] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.473 [2024-07-20 19:04:06.577053] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.473 [2024-07-20 19:04:06.577087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.473 qpair failed and we were unable to recover it. 
00:33:56.473 [2024-07-20 19:04:06.586768] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.473 [2024-07-20 19:04:06.586992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.473 [2024-07-20 19:04:06.587018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.473 [2024-07-20 19:04:06.587032] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.473 [2024-07-20 19:04:06.587046] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.473 [2024-07-20 19:04:06.587075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.473 qpair failed and we were unable to recover it. 00:33:56.473 [2024-07-20 19:04:06.596838] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.473 [2024-07-20 19:04:06.597048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.473 [2024-07-20 19:04:06.597074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.473 [2024-07-20 19:04:06.597088] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.473 [2024-07-20 19:04:06.597102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.473 [2024-07-20 19:04:06.597130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.473 qpair failed and we were unable to recover it. 00:33:56.473 [2024-07-20 19:04:06.606850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.473 [2024-07-20 19:04:06.607065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.473 [2024-07-20 19:04:06.607091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.473 [2024-07-20 19:04:06.607105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.473 [2024-07-20 19:04:06.607118] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.473 [2024-07-20 19:04:06.607147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.473 qpair failed and we were unable to recover it. 
00:33:56.473 [2024-07-20 19:04:06.616943] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.473 [2024-07-20 19:04:06.617175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.473 [2024-07-20 19:04:06.617200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.473 [2024-07-20 19:04:06.617214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.473 [2024-07-20 19:04:06.617227] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.473 [2024-07-20 19:04:06.617256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.473 qpair failed and we were unable to recover it. 00:33:56.473 [2024-07-20 19:04:06.626898] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.473 [2024-07-20 19:04:06.627114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.473 [2024-07-20 19:04:06.627144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.473 [2024-07-20 19:04:06.627160] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.473 [2024-07-20 19:04:06.627173] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.473 [2024-07-20 19:04:06.627202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.473 qpair failed and we were unable to recover it. 00:33:56.473 [2024-07-20 19:04:06.636959] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.473 [2024-07-20 19:04:06.637171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.473 [2024-07-20 19:04:06.637197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.473 [2024-07-20 19:04:06.637211] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.473 [2024-07-20 19:04:06.637224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.473 [2024-07-20 19:04:06.637252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.473 qpair failed and we were unable to recover it. 
00:33:56.473 [2024-07-20 19:04:06.646944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.473 [2024-07-20 19:04:06.647148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.473 [2024-07-20 19:04:06.647173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.474 [2024-07-20 19:04:06.647187] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.474 [2024-07-20 19:04:06.647200] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.474 [2024-07-20 19:04:06.647229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.474 qpair failed and we were unable to recover it. 00:33:56.474 [2024-07-20 19:04:06.657001] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.474 [2024-07-20 19:04:06.657218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.474 [2024-07-20 19:04:06.657243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.474 [2024-07-20 19:04:06.657257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.474 [2024-07-20 19:04:06.657270] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.474 [2024-07-20 19:04:06.657298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.474 qpair failed and we were unable to recover it. 00:33:56.474 [2024-07-20 19:04:06.667040] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.474 [2024-07-20 19:04:06.667249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.474 [2024-07-20 19:04:06.667275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.474 [2024-07-20 19:04:06.667290] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.474 [2024-07-20 19:04:06.667309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.474 [2024-07-20 19:04:06.667338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.474 qpair failed and we were unable to recover it. 
00:33:56.474 [2024-07-20 19:04:06.677112] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.474 [2024-07-20 19:04:06.677324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.474 [2024-07-20 19:04:06.677351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.474 [2024-07-20 19:04:06.677370] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.474 [2024-07-20 19:04:06.677384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.474 [2024-07-20 19:04:06.677414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.474 qpair failed and we were unable to recover it. 00:33:56.474 [2024-07-20 19:04:06.687074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.474 [2024-07-20 19:04:06.687275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.474 [2024-07-20 19:04:06.687301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.474 [2024-07-20 19:04:06.687315] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.474 [2024-07-20 19:04:06.687329] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.474 [2024-07-20 19:04:06.687358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.474 qpair failed and we were unable to recover it. 00:33:56.474 [2024-07-20 19:04:06.697112] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.474 [2024-07-20 19:04:06.697329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.474 [2024-07-20 19:04:06.697354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.474 [2024-07-20 19:04:06.697368] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.474 [2024-07-20 19:04:06.697381] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.474 [2024-07-20 19:04:06.697409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.474 qpair failed and we were unable to recover it. 
00:33:56.474 [2024-07-20 19:04:06.707172] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.474 [2024-07-20 19:04:06.707384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.474 [2024-07-20 19:04:06.707410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.474 [2024-07-20 19:04:06.707425] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.474 [2024-07-20 19:04:06.707438] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.474 [2024-07-20 19:04:06.707467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.474 qpair failed and we were unable to recover it. 00:33:56.474 [2024-07-20 19:04:06.717182] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.474 [2024-07-20 19:04:06.717468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.474 [2024-07-20 19:04:06.717493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.474 [2024-07-20 19:04:06.717507] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.474 [2024-07-20 19:04:06.717521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.474 [2024-07-20 19:04:06.717549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.474 qpair failed and we were unable to recover it. 00:33:56.474 [2024-07-20 19:04:06.727170] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.474 [2024-07-20 19:04:06.727372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.474 [2024-07-20 19:04:06.727398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.474 [2024-07-20 19:04:06.727412] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.474 [2024-07-20 19:04:06.727425] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.474 [2024-07-20 19:04:06.727453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.474 qpair failed and we were unable to recover it. 
00:33:56.474 [2024-07-20 19:04:06.737244] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.474 [2024-07-20 19:04:06.737487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.474 [2024-07-20 19:04:06.737512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.474 [2024-07-20 19:04:06.737526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.474 [2024-07-20 19:04:06.737539] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.474 [2024-07-20 19:04:06.737567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.474 qpair failed and we were unable to recover it. 00:33:56.474 [2024-07-20 19:04:06.747300] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.474 [2024-07-20 19:04:06.747585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.474 [2024-07-20 19:04:06.747611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.474 [2024-07-20 19:04:06.747626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.474 [2024-07-20 19:04:06.747639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.474 [2024-07-20 19:04:06.747667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.474 qpair failed and we were unable to recover it. 00:33:56.474 [2024-07-20 19:04:06.757301] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.474 [2024-07-20 19:04:06.757526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.474 [2024-07-20 19:04:06.757552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.474 [2024-07-20 19:04:06.757566] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.474 [2024-07-20 19:04:06.757584] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.474 [2024-07-20 19:04:06.757613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.474 qpair failed and we were unable to recover it. 
00:33:56.474 [2024-07-20 19:04:06.767339] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.474 [2024-07-20 19:04:06.767543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.474 [2024-07-20 19:04:06.767568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.474 [2024-07-20 19:04:06.767582] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.474 [2024-07-20 19:04:06.767595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.474 [2024-07-20 19:04:06.767623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.474 qpair failed and we were unable to recover it. 00:33:56.474 [2024-07-20 19:04:06.777322] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.474 [2024-07-20 19:04:06.777534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.474 [2024-07-20 19:04:06.777559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.474 [2024-07-20 19:04:06.777573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.474 [2024-07-20 19:04:06.777587] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.474 [2024-07-20 19:04:06.777615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.474 qpair failed and we were unable to recover it. 00:33:56.474 [2024-07-20 19:04:06.787376] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.474 [2024-07-20 19:04:06.787591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.474 [2024-07-20 19:04:06.787617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.474 [2024-07-20 19:04:06.787631] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.474 [2024-07-20 19:04:06.787645] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.474 [2024-07-20 19:04:06.787673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.474 qpair failed and we were unable to recover it. 
00:33:56.732 [2024-07-20 19:04:06.797394] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.732 [2024-07-20 19:04:06.797616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.732 [2024-07-20 19:04:06.797644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.732 [2024-07-20 19:04:06.797659] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.733 [2024-07-20 19:04:06.797673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.733 [2024-07-20 19:04:06.797701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.733 qpair failed and we were unable to recover it. 00:33:56.733 [2024-07-20 19:04:06.807401] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.733 [2024-07-20 19:04:06.807614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.733 [2024-07-20 19:04:06.807642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.733 [2024-07-20 19:04:06.807657] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.733 [2024-07-20 19:04:06.807671] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.733 [2024-07-20 19:04:06.807701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.733 qpair failed and we were unable to recover it. 00:33:56.733 [2024-07-20 19:04:06.817473] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.733 [2024-07-20 19:04:06.817723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.733 [2024-07-20 19:04:06.817748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.733 [2024-07-20 19:04:06.817763] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.733 [2024-07-20 19:04:06.817777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.733 [2024-07-20 19:04:06.817813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.733 qpair failed and we were unable to recover it. 
00:33:56.733 [2024-07-20 19:04:06.827490] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.733 [2024-07-20 19:04:06.827698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.733 [2024-07-20 19:04:06.827725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.733 [2024-07-20 19:04:06.827739] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.733 [2024-07-20 19:04:06.827752] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.733 [2024-07-20 19:04:06.827781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.733 qpair failed and we were unable to recover it. 00:33:56.733 [2024-07-20 19:04:06.837476] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.733 [2024-07-20 19:04:06.837681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.733 [2024-07-20 19:04:06.837706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.733 [2024-07-20 19:04:06.837720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.733 [2024-07-20 19:04:06.837734] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.733 [2024-07-20 19:04:06.837762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.733 qpair failed and we were unable to recover it. 00:33:56.733 [2024-07-20 19:04:06.847541] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.733 [2024-07-20 19:04:06.847749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.733 [2024-07-20 19:04:06.847775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.733 [2024-07-20 19:04:06.847789] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.733 [2024-07-20 19:04:06.847817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.733 [2024-07-20 19:04:06.847848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.733 qpair failed and we were unable to recover it. 
00:33:56.733 [2024-07-20 19:04:06.857593] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.733 [2024-07-20 19:04:06.857865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.733 [2024-07-20 19:04:06.857891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.733 [2024-07-20 19:04:06.857906] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.733 [2024-07-20 19:04:06.857919] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.733 [2024-07-20 19:04:06.857947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.733 qpair failed and we were unable to recover it. 00:33:56.733 [2024-07-20 19:04:06.867613] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.733 [2024-07-20 19:04:06.867827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.733 [2024-07-20 19:04:06.867852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.733 [2024-07-20 19:04:06.867867] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.733 [2024-07-20 19:04:06.867880] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.733 [2024-07-20 19:04:06.867909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.733 qpair failed and we were unable to recover it. 00:33:56.733 [2024-07-20 19:04:06.877615] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.733 [2024-07-20 19:04:06.877841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.733 [2024-07-20 19:04:06.877867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.733 [2024-07-20 19:04:06.877882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.733 [2024-07-20 19:04:06.877895] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.733 [2024-07-20 19:04:06.877923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.733 qpair failed and we were unable to recover it. 
00:33:56.733 [2024-07-20 19:04:06.887628] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.733 [2024-07-20 19:04:06.887882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.733 [2024-07-20 19:04:06.887908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.733 [2024-07-20 19:04:06.887923] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.733 [2024-07-20 19:04:06.887935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.733 [2024-07-20 19:04:06.887964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.733 qpair failed and we were unable to recover it. 00:33:56.733 [2024-07-20 19:04:06.897666] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.733 [2024-07-20 19:04:06.897897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.733 [2024-07-20 19:04:06.897922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.733 [2024-07-20 19:04:06.897937] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.733 [2024-07-20 19:04:06.897950] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.733 [2024-07-20 19:04:06.897979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.733 qpair failed and we were unable to recover it. 00:33:56.733 [2024-07-20 19:04:06.907697] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.733 [2024-07-20 19:04:06.907999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.733 [2024-07-20 19:04:06.908027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.733 [2024-07-20 19:04:06.908042] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.733 [2024-07-20 19:04:06.908055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.733 [2024-07-20 19:04:06.908085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.733 qpair failed and we were unable to recover it. 
00:33:56.733 [2024-07-20 19:04:06.917787] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.733 [2024-07-20 19:04:06.918046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.733 [2024-07-20 19:04:06.918072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.733 [2024-07-20 19:04:06.918086] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.733 [2024-07-20 19:04:06.918099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.733 [2024-07-20 19:04:06.918129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.733 qpair failed and we were unable to recover it. 00:33:56.733 [2024-07-20 19:04:06.927750] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.733 [2024-07-20 19:04:06.928011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.733 [2024-07-20 19:04:06.928037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.733 [2024-07-20 19:04:06.928052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.733 [2024-07-20 19:04:06.928065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.733 [2024-07-20 19:04:06.928094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.733 qpair failed and we were unable to recover it. 00:33:56.733 [2024-07-20 19:04:06.937780] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.733 [2024-07-20 19:04:06.938011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.733 [2024-07-20 19:04:06.938037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.733 [2024-07-20 19:04:06.938057] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.733 [2024-07-20 19:04:06.938072] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.733 [2024-07-20 19:04:06.938100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.733 qpair failed and we were unable to recover it. 
00:33:56.733 [2024-07-20 19:04:06.947897] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.733 [2024-07-20 19:04:06.948133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.733 [2024-07-20 19:04:06.948159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.734 [2024-07-20 19:04:06.948174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.734 [2024-07-20 19:04:06.948188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.734 [2024-07-20 19:04:06.948219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.734 qpair failed and we were unable to recover it. 00:33:56.734 [2024-07-20 19:04:06.957853] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.734 [2024-07-20 19:04:06.958073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.734 [2024-07-20 19:04:06.958099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.734 [2024-07-20 19:04:06.958114] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.734 [2024-07-20 19:04:06.958128] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.734 [2024-07-20 19:04:06.958157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.734 qpair failed and we were unable to recover it. 00:33:56.734 [2024-07-20 19:04:06.967959] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.734 [2024-07-20 19:04:06.968203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.734 [2024-07-20 19:04:06.968229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.734 [2024-07-20 19:04:06.968244] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.734 [2024-07-20 19:04:06.968257] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.734 [2024-07-20 19:04:06.968288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.734 qpair failed and we were unable to recover it. 
00:33:56.734 [2024-07-20 19:04:06.978019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.734 [2024-07-20 19:04:06.978234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.734 [2024-07-20 19:04:06.978260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.734 [2024-07-20 19:04:06.978276] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.734 [2024-07-20 19:04:06.978289] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.734 [2024-07-20 19:04:06.978320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.734 qpair failed and we were unable to recover it. 00:33:56.734 [2024-07-20 19:04:06.987933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.734 [2024-07-20 19:04:06.988148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.734 [2024-07-20 19:04:06.988174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.734 [2024-07-20 19:04:06.988189] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.734 [2024-07-20 19:04:06.988202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.734 [2024-07-20 19:04:06.988231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.734 qpair failed and we were unable to recover it. 00:33:56.734 [2024-07-20 19:04:06.997953] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.734 [2024-07-20 19:04:06.998168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.734 [2024-07-20 19:04:06.998194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.734 [2024-07-20 19:04:06.998209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.734 [2024-07-20 19:04:06.998222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.734 [2024-07-20 19:04:06.998249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.734 qpair failed and we were unable to recover it. 
00:33:56.734 [2024-07-20 19:04:07.007970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.734 [2024-07-20 19:04:07.008175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.734 [2024-07-20 19:04:07.008201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.734 [2024-07-20 19:04:07.008216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.734 [2024-07-20 19:04:07.008229] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.734 [2024-07-20 19:04:07.008258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.734 qpair failed and we were unable to recover it. 00:33:56.734 [2024-07-20 19:04:07.018024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.734 [2024-07-20 19:04:07.018280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.734 [2024-07-20 19:04:07.018306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.734 [2024-07-20 19:04:07.018320] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.734 [2024-07-20 19:04:07.018333] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.734 [2024-07-20 19:04:07.018362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.734 qpair failed and we were unable to recover it. 00:33:56.734 [2024-07-20 19:04:07.028040] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.734 [2024-07-20 19:04:07.028245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.734 [2024-07-20 19:04:07.028271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.734 [2024-07-20 19:04:07.028291] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.734 [2024-07-20 19:04:07.028305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.734 [2024-07-20 19:04:07.028334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.734 qpair failed and we were unable to recover it. 
00:33:56.734 [2024-07-20 19:04:07.038171] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.734 [2024-07-20 19:04:07.038377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.734 [2024-07-20 19:04:07.038404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.734 [2024-07-20 19:04:07.038419] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.734 [2024-07-20 19:04:07.038432] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.734 [2024-07-20 19:04:07.038461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.734 qpair failed and we were unable to recover it. 00:33:56.734 [2024-07-20 19:04:07.048249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.734 [2024-07-20 19:04:07.048477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.734 [2024-07-20 19:04:07.048503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.734 [2024-07-20 19:04:07.048522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.734 [2024-07-20 19:04:07.048537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.734 [2024-07-20 19:04:07.048566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.734 qpair failed and we were unable to recover it. 00:33:56.993 [2024-07-20 19:04:07.058158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.993 [2024-07-20 19:04:07.058370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.993 [2024-07-20 19:04:07.058398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.993 [2024-07-20 19:04:07.058413] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.993 [2024-07-20 19:04:07.058426] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.993 [2024-07-20 19:04:07.058456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.993 qpair failed and we were unable to recover it. 
00:33:56.993 [2024-07-20 19:04:07.068217] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.993 [2024-07-20 19:04:07.068431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.993 [2024-07-20 19:04:07.068459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.993 [2024-07-20 19:04:07.068474] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.993 [2024-07-20 19:04:07.068488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.993 [2024-07-20 19:04:07.068517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.993 qpair failed and we were unable to recover it. 00:33:56.993 [2024-07-20 19:04:07.078193] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.993 [2024-07-20 19:04:07.078399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.993 [2024-07-20 19:04:07.078426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.993 [2024-07-20 19:04:07.078440] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.993 [2024-07-20 19:04:07.078454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.993 [2024-07-20 19:04:07.078483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.993 qpair failed and we were unable to recover it. 00:33:56.993 [2024-07-20 19:04:07.088224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.993 [2024-07-20 19:04:07.088426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.993 [2024-07-20 19:04:07.088452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.993 [2024-07-20 19:04:07.088467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.993 [2024-07-20 19:04:07.088480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.993 [2024-07-20 19:04:07.088508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.993 qpair failed and we were unable to recover it. 
00:33:56.993 [2024-07-20 19:04:07.098267] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.993 [2024-07-20 19:04:07.098517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.993 [2024-07-20 19:04:07.098543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.993 [2024-07-20 19:04:07.098557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.993 [2024-07-20 19:04:07.098570] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.993 [2024-07-20 19:04:07.098598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.993 qpair failed and we were unable to recover it. 00:33:56.993 [2024-07-20 19:04:07.108317] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.993 [2024-07-20 19:04:07.108526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.993 [2024-07-20 19:04:07.108551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.993 [2024-07-20 19:04:07.108566] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.993 [2024-07-20 19:04:07.108580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.993 [2024-07-20 19:04:07.108608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.993 qpair failed and we were unable to recover it. 00:33:56.993 [2024-07-20 19:04:07.118338] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.993 [2024-07-20 19:04:07.118546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.993 [2024-07-20 19:04:07.118571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.993 [2024-07-20 19:04:07.118593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.993 [2024-07-20 19:04:07.118607] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.993 [2024-07-20 19:04:07.118636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.993 qpair failed and we were unable to recover it. 
00:33:56.993 [2024-07-20 19:04:07.128334] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.993 [2024-07-20 19:04:07.128537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.993 [2024-07-20 19:04:07.128562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.993 [2024-07-20 19:04:07.128577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.993 [2024-07-20 19:04:07.128590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.993 [2024-07-20 19:04:07.128618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.993 qpair failed and we were unable to recover it. 00:33:56.993 [2024-07-20 19:04:07.138382] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.993 [2024-07-20 19:04:07.138630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.993 [2024-07-20 19:04:07.138656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.993 [2024-07-20 19:04:07.138671] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.993 [2024-07-20 19:04:07.138684] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.994 [2024-07-20 19:04:07.138712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.994 qpair failed and we were unable to recover it. 00:33:56.994 [2024-07-20 19:04:07.148490] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.994 [2024-07-20 19:04:07.148777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.994 [2024-07-20 19:04:07.148810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.994 [2024-07-20 19:04:07.148825] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.994 [2024-07-20 19:04:07.148838] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.994 [2024-07-20 19:04:07.148867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.994 qpair failed and we were unable to recover it. 
00:33:56.994 [2024-07-20 19:04:07.158450] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.994 [2024-07-20 19:04:07.158658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.994 [2024-07-20 19:04:07.158684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.994 [2024-07-20 19:04:07.158698] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.994 [2024-07-20 19:04:07.158711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.994 [2024-07-20 19:04:07.158740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.994 qpair failed and we were unable to recover it. 00:33:56.994 [2024-07-20 19:04:07.168445] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.994 [2024-07-20 19:04:07.168652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.994 [2024-07-20 19:04:07.168678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.994 [2024-07-20 19:04:07.168692] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.994 [2024-07-20 19:04:07.168705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.994 [2024-07-20 19:04:07.168733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.994 qpair failed and we were unable to recover it. 00:33:56.994 [2024-07-20 19:04:07.178499] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.994 [2024-07-20 19:04:07.178719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.994 [2024-07-20 19:04:07.178744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.994 [2024-07-20 19:04:07.178759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.994 [2024-07-20 19:04:07.178772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.994 [2024-07-20 19:04:07.178808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.994 qpair failed and we were unable to recover it. 
00:33:56.994 [2024-07-20 19:04:07.188519] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.994 [2024-07-20 19:04:07.188756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.994 [2024-07-20 19:04:07.188781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.994 [2024-07-20 19:04:07.188803] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.994 [2024-07-20 19:04:07.188819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.994 [2024-07-20 19:04:07.188850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.994 qpair failed and we were unable to recover it. 00:33:56.994 [2024-07-20 19:04:07.198533] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.994 [2024-07-20 19:04:07.198744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.994 [2024-07-20 19:04:07.198769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.994 [2024-07-20 19:04:07.198784] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.994 [2024-07-20 19:04:07.198807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.994 [2024-07-20 19:04:07.198837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.994 qpair failed and we were unable to recover it. 00:33:56.994 [2024-07-20 19:04:07.208600] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.994 [2024-07-20 19:04:07.208810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.994 [2024-07-20 19:04:07.208836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.994 [2024-07-20 19:04:07.208857] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.994 [2024-07-20 19:04:07.208870] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.994 [2024-07-20 19:04:07.208899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.994 qpair failed and we were unable to recover it. 
00:33:56.994 [2024-07-20 19:04:07.218586] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.994 [2024-07-20 19:04:07.218808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.994 [2024-07-20 19:04:07.218833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.994 [2024-07-20 19:04:07.218848] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.994 [2024-07-20 19:04:07.218861] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.994 [2024-07-20 19:04:07.218890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.994 qpair failed and we were unable to recover it. 00:33:56.994 [2024-07-20 19:04:07.228655] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.994 [2024-07-20 19:04:07.228902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.994 [2024-07-20 19:04:07.228927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.994 [2024-07-20 19:04:07.228942] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.994 [2024-07-20 19:04:07.228955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.994 [2024-07-20 19:04:07.228984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.994 qpair failed and we were unable to recover it. 00:33:56.994 [2024-07-20 19:04:07.238670] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.994 [2024-07-20 19:04:07.238935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.994 [2024-07-20 19:04:07.238961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.994 [2024-07-20 19:04:07.238975] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.994 [2024-07-20 19:04:07.238989] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.994 [2024-07-20 19:04:07.239018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.994 qpair failed and we were unable to recover it. 
00:33:56.994 [2024-07-20 19:04:07.248696] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.994 [2024-07-20 19:04:07.248915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.994 [2024-07-20 19:04:07.248940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.994 [2024-07-20 19:04:07.248955] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.994 [2024-07-20 19:04:07.248969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.994 [2024-07-20 19:04:07.248997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.994 qpair failed and we were unable to recover it. 00:33:56.994 [2024-07-20 19:04:07.258739] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.994 [2024-07-20 19:04:07.258966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.994 [2024-07-20 19:04:07.258991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.994 [2024-07-20 19:04:07.259006] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.994 [2024-07-20 19:04:07.259019] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.994 [2024-07-20 19:04:07.259047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.994 qpair failed and we were unable to recover it. 00:33:56.994 [2024-07-20 19:04:07.268745] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.994 [2024-07-20 19:04:07.269001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.994 [2024-07-20 19:04:07.269027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.994 [2024-07-20 19:04:07.269041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.994 [2024-07-20 19:04:07.269054] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.994 [2024-07-20 19:04:07.269082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.994 qpair failed and we were unable to recover it. 
00:33:56.994 [2024-07-20 19:04:07.278747] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.994 [2024-07-20 19:04:07.278961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.994 [2024-07-20 19:04:07.278986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.994 [2024-07-20 19:04:07.279001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.994 [2024-07-20 19:04:07.279014] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.994 [2024-07-20 19:04:07.279043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.994 qpair failed and we were unable to recover it. 00:33:56.994 [2024-07-20 19:04:07.288800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.995 [2024-07-20 19:04:07.289006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.995 [2024-07-20 19:04:07.289031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.995 [2024-07-20 19:04:07.289046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.995 [2024-07-20 19:04:07.289059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.995 [2024-07-20 19:04:07.289087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.995 qpair failed and we were unable to recover it. 00:33:56.995 [2024-07-20 19:04:07.298827] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.995 [2024-07-20 19:04:07.299041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.995 [2024-07-20 19:04:07.299071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.995 [2024-07-20 19:04:07.299086] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.995 [2024-07-20 19:04:07.299099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.995 [2024-07-20 19:04:07.299128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.995 qpair failed and we were unable to recover it. 
00:33:56.995 [2024-07-20 19:04:07.308892] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.995 [2024-07-20 19:04:07.309109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.995 [2024-07-20 19:04:07.309135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.995 [2024-07-20 19:04:07.309150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.995 [2024-07-20 19:04:07.309163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:56.995 [2024-07-20 19:04:07.309191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:56.995 qpair failed and we were unable to recover it. 00:33:57.253 [2024-07-20 19:04:07.319005] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.253 [2024-07-20 19:04:07.319217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.253 [2024-07-20 19:04:07.319245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.253 [2024-07-20 19:04:07.319260] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.253 [2024-07-20 19:04:07.319273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.253 [2024-07-20 19:04:07.319304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.253 qpair failed and we were unable to recover it. 00:33:57.253 [2024-07-20 19:04:07.328937] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.253 [2024-07-20 19:04:07.329188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.253 [2024-07-20 19:04:07.329215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.253 [2024-07-20 19:04:07.329229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.253 [2024-07-20 19:04:07.329243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.253 [2024-07-20 19:04:07.329272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.253 qpair failed and we were unable to recover it. 
00:33:57.253 [2024-07-20 19:04:07.339039] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.253 [2024-07-20 19:04:07.339250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.253 [2024-07-20 19:04:07.339276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.253 [2024-07-20 19:04:07.339291] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.253 [2024-07-20 19:04:07.339304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.253 [2024-07-20 19:04:07.339341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.253 qpair failed and we were unable to recover it. 00:33:57.253 [2024-07-20 19:04:07.348965] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.253 [2024-07-20 19:04:07.349173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.253 [2024-07-20 19:04:07.349199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.253 [2024-07-20 19:04:07.349214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.253 [2024-07-20 19:04:07.349227] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.253 [2024-07-20 19:04:07.349258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.253 qpair failed and we were unable to recover it. 00:33:57.253 [2024-07-20 19:04:07.359024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.253 [2024-07-20 19:04:07.359235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.253 [2024-07-20 19:04:07.359261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.253 [2024-07-20 19:04:07.359275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.253 [2024-07-20 19:04:07.359288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.253 [2024-07-20 19:04:07.359317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.253 qpair failed and we were unable to recover it. 
00:33:57.253 [2024-07-20 19:04:07.369071] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.253 [2024-07-20 19:04:07.369311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.253 [2024-07-20 19:04:07.369339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.253 [2024-07-20 19:04:07.369356] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.253 [2024-07-20 19:04:07.369370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.253 [2024-07-20 19:04:07.369400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.253 qpair failed and we were unable to recover it. 00:33:57.253 [2024-07-20 19:04:07.379071] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.253 [2024-07-20 19:04:07.379328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.253 [2024-07-20 19:04:07.379354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.253 [2024-07-20 19:04:07.379368] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.253 [2024-07-20 19:04:07.379381] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.253 [2024-07-20 19:04:07.379409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.253 qpair failed and we were unable to recover it. 00:33:57.253 [2024-07-20 19:04:07.389081] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.253 [2024-07-20 19:04:07.389345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.253 [2024-07-20 19:04:07.389377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.253 [2024-07-20 19:04:07.389393] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.253 [2024-07-20 19:04:07.389406] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.253 [2024-07-20 19:04:07.389434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.253 qpair failed and we were unable to recover it. 
00:33:57.253 [2024-07-20 19:04:07.399210] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.253 [2024-07-20 19:04:07.399439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.253 [2024-07-20 19:04:07.399465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.253 [2024-07-20 19:04:07.399480] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.253 [2024-07-20 19:04:07.399493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.253 [2024-07-20 19:04:07.399521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.253 qpair failed and we were unable to recover it. 00:33:57.253 [2024-07-20 19:04:07.409181] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.253 [2024-07-20 19:04:07.409431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.253 [2024-07-20 19:04:07.409457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.254 [2024-07-20 19:04:07.409472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.254 [2024-07-20 19:04:07.409485] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.254 [2024-07-20 19:04:07.409513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.254 qpair failed and we were unable to recover it. 00:33:57.254 [2024-07-20 19:04:07.419250] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.254 [2024-07-20 19:04:07.419470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.254 [2024-07-20 19:04:07.419497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.254 [2024-07-20 19:04:07.419517] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.254 [2024-07-20 19:04:07.419531] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.254 [2024-07-20 19:04:07.419560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.254 qpair failed and we were unable to recover it. 
00:33:57.254 [2024-07-20 19:04:07.429204] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.254 [2024-07-20 19:04:07.429423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.254 [2024-07-20 19:04:07.429450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.254 [2024-07-20 19:04:07.429465] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.254 [2024-07-20 19:04:07.429478] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.254 [2024-07-20 19:04:07.429513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.254 qpair failed and we were unable to recover it. 00:33:57.254 [2024-07-20 19:04:07.439235] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.254 [2024-07-20 19:04:07.439468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.254 [2024-07-20 19:04:07.439494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.254 [2024-07-20 19:04:07.439509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.254 [2024-07-20 19:04:07.439522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.254 [2024-07-20 19:04:07.439550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.254 qpair failed and we were unable to recover it. 00:33:57.254 [2024-07-20 19:04:07.449268] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.254 [2024-07-20 19:04:07.449482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.254 [2024-07-20 19:04:07.449507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.254 [2024-07-20 19:04:07.449522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.254 [2024-07-20 19:04:07.449535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.254 [2024-07-20 19:04:07.449564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.254 qpair failed and we were unable to recover it. 
00:33:57.254 [2024-07-20 19:04:07.459338] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.254 [2024-07-20 19:04:07.459597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.254 [2024-07-20 19:04:07.459623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.254 [2024-07-20 19:04:07.459637] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.254 [2024-07-20 19:04:07.459650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.254 [2024-07-20 19:04:07.459678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.254 qpair failed and we were unable to recover it. 00:33:57.254 [2024-07-20 19:04:07.469319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.254 [2024-07-20 19:04:07.469539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.254 [2024-07-20 19:04:07.469565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.254 [2024-07-20 19:04:07.469580] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.254 [2024-07-20 19:04:07.469593] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.254 [2024-07-20 19:04:07.469621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.254 qpair failed and we were unable to recover it. 00:33:57.254 [2024-07-20 19:04:07.479376] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.254 [2024-07-20 19:04:07.479599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.254 [2024-07-20 19:04:07.479629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.254 [2024-07-20 19:04:07.479644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.254 [2024-07-20 19:04:07.479657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.254 [2024-07-20 19:04:07.479685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.254 qpair failed and we were unable to recover it. 
00:33:57.254 [2024-07-20 19:04:07.489371] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.254 [2024-07-20 19:04:07.489580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.254 [2024-07-20 19:04:07.489607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.254 [2024-07-20 19:04:07.489622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.254 [2024-07-20 19:04:07.489635] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.254 [2024-07-20 19:04:07.489664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.254 qpair failed and we were unable to recover it. 00:33:57.254 [2024-07-20 19:04:07.499466] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.254 [2024-07-20 19:04:07.499685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.254 [2024-07-20 19:04:07.499711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.254 [2024-07-20 19:04:07.499725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.254 [2024-07-20 19:04:07.499739] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.254 [2024-07-20 19:04:07.499767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.254 qpair failed and we were unable to recover it. 00:33:57.254 [2024-07-20 19:04:07.509447] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.254 [2024-07-20 19:04:07.509664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.254 [2024-07-20 19:04:07.509689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.254 [2024-07-20 19:04:07.509704] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.254 [2024-07-20 19:04:07.509717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.254 [2024-07-20 19:04:07.509745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.254 qpair failed and we were unable to recover it. 
00:33:57.254 [2024-07-20 19:04:07.519460] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.254 [2024-07-20 19:04:07.519670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.254 [2024-07-20 19:04:07.519695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.254 [2024-07-20 19:04:07.519709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.254 [2024-07-20 19:04:07.519722] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.254 [2024-07-20 19:04:07.519757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.254 qpair failed and we were unable to recover it. 00:33:57.254 [2024-07-20 19:04:07.529504] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.254 [2024-07-20 19:04:07.529721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.254 [2024-07-20 19:04:07.529748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.254 [2024-07-20 19:04:07.529767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.254 [2024-07-20 19:04:07.529781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.254 [2024-07-20 19:04:07.529818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.254 qpair failed and we were unable to recover it. 00:33:57.254 [2024-07-20 19:04:07.539529] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.254 [2024-07-20 19:04:07.539791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.254 [2024-07-20 19:04:07.539826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.254 [2024-07-20 19:04:07.539841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.254 [2024-07-20 19:04:07.539854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.254 [2024-07-20 19:04:07.539883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.254 qpair failed and we were unable to recover it. 
00:33:57.254 [2024-07-20 19:04:07.549553] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.254 [2024-07-20 19:04:07.549809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.254 [2024-07-20 19:04:07.549836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.254 [2024-07-20 19:04:07.549851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.254 [2024-07-20 19:04:07.549863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.255 [2024-07-20 19:04:07.549892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.255 qpair failed and we were unable to recover it. 00:33:57.255 [2024-07-20 19:04:07.559585] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.255 [2024-07-20 19:04:07.559791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.255 [2024-07-20 19:04:07.559824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.255 [2024-07-20 19:04:07.559839] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.255 [2024-07-20 19:04:07.559852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.255 [2024-07-20 19:04:07.559881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.255 qpair failed and we were unable to recover it. 00:33:57.255 [2024-07-20 19:04:07.569646] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.255 [2024-07-20 19:04:07.569860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.255 [2024-07-20 19:04:07.569892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.255 [2024-07-20 19:04:07.569907] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.255 [2024-07-20 19:04:07.569920] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.255 [2024-07-20 19:04:07.569949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.255 qpair failed and we were unable to recover it. 
00:33:57.514 [2024-07-20 19:04:07.579664] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.514 [2024-07-20 19:04:07.579923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.514 [2024-07-20 19:04:07.579951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.514 [2024-07-20 19:04:07.579967] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.514 [2024-07-20 19:04:07.579980] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.514 [2024-07-20 19:04:07.580011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.514 qpair failed and we were unable to recover it. 00:33:57.514 [2024-07-20 19:04:07.589689] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.514 [2024-07-20 19:04:07.589913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.514 [2024-07-20 19:04:07.589940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.514 [2024-07-20 19:04:07.589956] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.515 [2024-07-20 19:04:07.589969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.515 [2024-07-20 19:04:07.589999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.515 qpair failed and we were unable to recover it. 00:33:57.515 [2024-07-20 19:04:07.599776] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.515 [2024-07-20 19:04:07.599995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.515 [2024-07-20 19:04:07.600022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.515 [2024-07-20 19:04:07.600037] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.515 [2024-07-20 19:04:07.600050] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.515 [2024-07-20 19:04:07.600079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.515 qpair failed and we were unable to recover it. 
00:33:57.515 [2024-07-20 19:04:07.609739] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.515 [2024-07-20 19:04:07.609951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.515 [2024-07-20 19:04:07.609977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.515 [2024-07-20 19:04:07.609992] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.515 [2024-07-20 19:04:07.610010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.515 [2024-07-20 19:04:07.610040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.515 qpair failed and we were unable to recover it. 00:33:57.515 [2024-07-20 19:04:07.619828] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.515 [2024-07-20 19:04:07.620045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.515 [2024-07-20 19:04:07.620070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.515 [2024-07-20 19:04:07.620084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.515 [2024-07-20 19:04:07.620097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.515 [2024-07-20 19:04:07.620126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.515 qpair failed and we were unable to recover it. 00:33:57.515 [2024-07-20 19:04:07.629901] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.515 [2024-07-20 19:04:07.630119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.515 [2024-07-20 19:04:07.630145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.515 [2024-07-20 19:04:07.630159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.515 [2024-07-20 19:04:07.630172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.515 [2024-07-20 19:04:07.630201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.515 qpair failed and we were unable to recover it. 
00:33:57.515 [2024-07-20 19:04:07.639912] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.515 [2024-07-20 19:04:07.640120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.515 [2024-07-20 19:04:07.640147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.515 [2024-07-20 19:04:07.640162] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.515 [2024-07-20 19:04:07.640175] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.515 [2024-07-20 19:04:07.640205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.515 qpair failed and we were unable to recover it. 00:33:57.515 [2024-07-20 19:04:07.649952] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.515 [2024-07-20 19:04:07.650158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.515 [2024-07-20 19:04:07.650184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.515 [2024-07-20 19:04:07.650199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.515 [2024-07-20 19:04:07.650212] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.515 [2024-07-20 19:04:07.650241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.515 qpair failed and we were unable to recover it. 00:33:57.515 [2024-07-20 19:04:07.659883] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.515 [2024-07-20 19:04:07.660099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.515 [2024-07-20 19:04:07.660124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.515 [2024-07-20 19:04:07.660139] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.515 [2024-07-20 19:04:07.660152] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.515 [2024-07-20 19:04:07.660181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.515 qpair failed and we were unable to recover it. 
00:33:57.515 [2024-07-20 19:04:07.669944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.515 [2024-07-20 19:04:07.670198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.515 [2024-07-20 19:04:07.670224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.515 [2024-07-20 19:04:07.670238] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.515 [2024-07-20 19:04:07.670252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.515 [2024-07-20 19:04:07.670280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.515 qpair failed and we were unable to recover it. 00:33:57.515 [2024-07-20 19:04:07.679966] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.515 [2024-07-20 19:04:07.680179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.515 [2024-07-20 19:04:07.680205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.515 [2024-07-20 19:04:07.680219] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.515 [2024-07-20 19:04:07.680232] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.515 [2024-07-20 19:04:07.680261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.515 qpair failed and we were unable to recover it. 00:33:57.515 [2024-07-20 19:04:07.690063] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.515 [2024-07-20 19:04:07.690269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.515 [2024-07-20 19:04:07.690295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.515 [2024-07-20 19:04:07.690310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.515 [2024-07-20 19:04:07.690323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.515 [2024-07-20 19:04:07.690351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.515 qpair failed and we were unable to recover it. 
00:33:57.515 [2024-07-20 19:04:07.700103] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.515 [2024-07-20 19:04:07.700361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.515 [2024-07-20 19:04:07.700386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.515 [2024-07-20 19:04:07.700400] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.515 [2024-07-20 19:04:07.700419] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.515 [2024-07-20 19:04:07.700448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.515 qpair failed and we were unable to recover it. 00:33:57.515 [2024-07-20 19:04:07.710110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.515 [2024-07-20 19:04:07.710342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.515 [2024-07-20 19:04:07.710368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.515 [2024-07-20 19:04:07.710383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.515 [2024-07-20 19:04:07.710396] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.515 [2024-07-20 19:04:07.710427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.515 qpair failed and we were unable to recover it. 00:33:57.515 [2024-07-20 19:04:07.720075] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.515 [2024-07-20 19:04:07.720315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.515 [2024-07-20 19:04:07.720341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.515 [2024-07-20 19:04:07.720355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.515 [2024-07-20 19:04:07.720369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.515 [2024-07-20 19:04:07.720399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.515 qpair failed and we were unable to recover it. 
00:33:57.515 [2024-07-20 19:04:07.730117] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.515 [2024-07-20 19:04:07.730331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.515 [2024-07-20 19:04:07.730357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.515 [2024-07-20 19:04:07.730371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.515 [2024-07-20 19:04:07.730384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.516 [2024-07-20 19:04:07.730413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.516 qpair failed and we were unable to recover it. 00:33:57.516 [2024-07-20 19:04:07.740129] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.516 [2024-07-20 19:04:07.740350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.516 [2024-07-20 19:04:07.740376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.516 [2024-07-20 19:04:07.740390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.516 [2024-07-20 19:04:07.740403] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.516 [2024-07-20 19:04:07.740431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.516 qpair failed and we were unable to recover it. 00:33:57.516 [2024-07-20 19:04:07.750148] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.516 [2024-07-20 19:04:07.750372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.516 [2024-07-20 19:04:07.750398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.516 [2024-07-20 19:04:07.750412] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.516 [2024-07-20 19:04:07.750425] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.516 [2024-07-20 19:04:07.750453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.516 qpair failed and we were unable to recover it. 
00:33:57.516 [2024-07-20 19:04:07.760249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.516 [2024-07-20 19:04:07.760454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.516 [2024-07-20 19:04:07.760479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.516 [2024-07-20 19:04:07.760494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.516 [2024-07-20 19:04:07.760507] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.516 [2024-07-20 19:04:07.760536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.516 qpair failed and we were unable to recover it. 00:33:57.516 [2024-07-20 19:04:07.770209] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.516 [2024-07-20 19:04:07.770410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.516 [2024-07-20 19:04:07.770436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.516 [2024-07-20 19:04:07.770450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.516 [2024-07-20 19:04:07.770464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.516 [2024-07-20 19:04:07.770491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.516 qpair failed and we were unable to recover it. 00:33:57.516 [2024-07-20 19:04:07.780239] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.516 [2024-07-20 19:04:07.780450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.516 [2024-07-20 19:04:07.780475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.516 [2024-07-20 19:04:07.780490] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.516 [2024-07-20 19:04:07.780503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.516 [2024-07-20 19:04:07.780531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.516 qpair failed and we were unable to recover it. 
00:33:57.516 [2024-07-20 19:04:07.790333] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.516 [2024-07-20 19:04:07.790547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.516 [2024-07-20 19:04:07.790573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.516 [2024-07-20 19:04:07.790587] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.516 [2024-07-20 19:04:07.790606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.516 [2024-07-20 19:04:07.790635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.516 qpair failed and we were unable to recover it. 00:33:57.516 [2024-07-20 19:04:07.800286] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.516 [2024-07-20 19:04:07.800500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.516 [2024-07-20 19:04:07.800525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.516 [2024-07-20 19:04:07.800540] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.516 [2024-07-20 19:04:07.800553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.516 [2024-07-20 19:04:07.800581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.516 qpair failed and we were unable to recover it. 00:33:57.516 [2024-07-20 19:04:07.810313] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.516 [2024-07-20 19:04:07.810526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.516 [2024-07-20 19:04:07.810551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.516 [2024-07-20 19:04:07.810566] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.516 [2024-07-20 19:04:07.810580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.516 [2024-07-20 19:04:07.810608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.516 qpair failed and we were unable to recover it. 
00:33:57.516 [2024-07-20 19:04:07.820397] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.516 [2024-07-20 19:04:07.820618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.516 [2024-07-20 19:04:07.820643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.516 [2024-07-20 19:04:07.820658] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.516 [2024-07-20 19:04:07.820671] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.516 [2024-07-20 19:04:07.820700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.516 qpair failed and we were unable to recover it. 00:33:57.516 [2024-07-20 19:04:07.830367] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.516 [2024-07-20 19:04:07.830587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.516 [2024-07-20 19:04:07.830613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.516 [2024-07-20 19:04:07.830627] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.516 [2024-07-20 19:04:07.830640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.516 [2024-07-20 19:04:07.830668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.516 qpair failed and we were unable to recover it. 00:33:57.775 [2024-07-20 19:04:07.840435] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.775 [2024-07-20 19:04:07.840681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.775 [2024-07-20 19:04:07.840710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.775 [2024-07-20 19:04:07.840725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.775 [2024-07-20 19:04:07.840738] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe58840 00:33:57.775 [2024-07-20 19:04:07.840767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.775 qpair failed and we were unable to recover it. 00:33:57.775 [2024-07-20 19:04:07.840807] nvme_ctrlr.c:4353:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:33:57.775 A controller has encountered a failure and is being reset. 00:33:57.775 [2024-07-20 19:04:07.840862] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe66390 (9): Bad file descriptor 00:33:57.775 Controller properly reset. 
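Note on the repeated failures above (annotation, not test output): each attempt's Fabrics CONNECT is rejected by the target ("Unknown controller ID 0x1"), and the completion carries sct 1 / sc 130 (0x82), which appears to correspond to the NVMe-oF "Connect Invalid Parameters" status; the host then reports a CQ transport error (-6, ENXIO) on the qpair until the failed keep-alive finally triggers the full controller reset logged just above. The C sketch below is illustrative only and is not taken from this test's sources; the helper name and error handling are assumptions, and it only shows the generic poll-and-reset pattern against the public SPDK NVMe API.

/*
 * Illustrative sketch only (assumed helper, not part of this autotest):
 * poll an I/O qpair and, on a transport-level completion error such as
 * the "CQ transport error -6" entries above, drop the qpair, reset the
 * controller, and re-create the qpair -- roughly the recovery the log
 * reports as "A controller has encountered a failure and is being reset."
 */
#include <stdio.h>
#include "spdk/nvme.h"

static int
poll_io_qpair_or_reset(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair **qpair)
{
	/* max_completions == 0 lets the driver reap everything available. */
	int32_t rc = spdk_nvme_qpair_process_completions(*qpair, 0);

	if (rc >= 0) {
		return 0;                       /* rc completions reaped, qpair healthy */
	}

	fprintf(stderr, "qpair failed (rc=%d), resetting controller\n", rc);
	spdk_nvme_ctrlr_free_io_qpair(*qpair);
	*qpair = NULL;

	if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
		return -1;                      /* reset failed, nothing more to do */
	}

	/* Re-create the I/O qpair with default options after the reset. */
	*qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	return *qpair != NULL ? 0 : -1;
}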
00:33:58.341 Initializing NVMe Controllers 00:33:58.341 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:58.341 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:58.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:58.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:58.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:58.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:58.341 Initialization complete. Launching workers. 00:33:58.341 Starting thread on core 1 00:33:58.341 Starting thread on core 2 00:33:58.341 Starting thread on core 3 00:33:58.341 Starting thread on core 0 00:33:58.341 19:04:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:33:58.341 00:33:58.341 real 0m10.783s 00:33:58.341 user 0m18.249s 00:33:58.341 sys 0m5.858s 00:33:58.341 19:04:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:58.341 19:04:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:58.341 ************************************ 00:33:58.341 END TEST nvmf_target_disconnect_tc2 00:33:58.341 ************************************ 00:33:58.341 19:04:08 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:33:58.341 19:04:08 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:33:58.341 19:04:08 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:33:58.341 19:04:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:58.341 19:04:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:33:58.341 19:04:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:58.341 19:04:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:33:58.341 19:04:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:58.341 19:04:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:58.599 rmmod nvme_tcp 00:33:58.599 rmmod nvme_fabrics 00:33:58.599 rmmod nvme_keyring 00:33:58.599 19:04:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:58.599 19:04:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:33:58.599 19:04:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:33:58.599 19:04:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1543860 ']' 00:33:58.599 19:04:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1543860 00:33:58.600 19:04:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 1543860 ']' 00:33:58.600 19:04:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 1543860 00:33:58.600 19:04:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:33:58.600 19:04:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:58.600 19:04:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1543860 00:33:58.600 19:04:08 nvmf_tcp.nvmf_target_disconnect -- 
common/autotest_common.sh@952 -- # process_name=reactor_4 00:33:58.600 19:04:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:33:58.600 19:04:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1543860' 00:33:58.600 killing process with pid 1543860 00:33:58.600 19:04:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 1543860 00:33:58.600 19:04:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 1543860 00:33:58.864 19:04:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:58.864 19:04:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:58.864 19:04:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:58.864 19:04:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:58.864 19:04:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:58.864 19:04:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:58.864 19:04:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:58.864 19:04:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:00.792 19:04:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:00.792 00:34:00.792 real 0m15.514s 00:34:00.792 user 0m44.277s 00:34:00.792 sys 0m7.875s 00:34:00.792 19:04:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:00.792 19:04:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:00.792 ************************************ 00:34:00.792 END TEST nvmf_target_disconnect 00:34:00.792 ************************************ 00:34:00.792 19:04:11 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:00.792 19:04:11 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:00.792 19:04:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:00.792 19:04:11 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:00.792 00:34:00.792 real 26m55.187s 00:34:00.792 user 73m19.607s 00:34:00.792 sys 6m17.192s 00:34:00.792 19:04:11 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:00.792 19:04:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:00.792 ************************************ 00:34:00.792 END TEST nvmf_tcp 00:34:00.792 ************************************ 00:34:00.792 19:04:11 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:00.792 19:04:11 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:00.792 19:04:11 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:00.792 19:04:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:00.792 19:04:11 -- common/autotest_common.sh@10 -- # set +x 00:34:01.052 ************************************ 00:34:01.052 START TEST spdkcli_nvmf_tcp 00:34:01.052 ************************************ 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:01.052 * Looking for test storage... 
00:34:01.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1545059 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1545059 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 1545059 ']' 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:01.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:01.052 19:04:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:01.052 [2024-07-20 19:04:11.252743] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:01.052 [2024-07-20 19:04:11.252849] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1545059 ] 00:34:01.052 EAL: No free 2048 kB hugepages reported on node 1 00:34:01.052 [2024-07-20 19:04:11.311465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:01.312 [2024-07-20 19:04:11.399749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:01.312 [2024-07-20 19:04:11.399753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:01.312 19:04:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:01.312 19:04:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:34:01.312 19:04:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:01.312 19:04:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:01.312 19:04:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:01.312 19:04:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:01.312 19:04:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:01.312 19:04:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:01.312 19:04:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:01.312 19:04:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:01.312 19:04:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:01.312 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:01.312 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:01.312 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:01.312 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:01.312 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:01.312 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:01.312 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:01.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:01.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:01.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:01.312 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:01.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:01.312 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:01.312 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:01.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:01.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:01.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:01.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:01.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:01.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:01.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:01.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:01.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:01.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:01.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:01.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:01.312 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:01.312 ' 00:34:03.844 [2024-07-20 19:04:14.066179] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:05.216 [2024-07-20 19:04:15.290511] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:07.739 [2024-07-20 19:04:17.553525] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:09.635 [2024-07-20 19:04:19.515928] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:11.008 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:11.008 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:11.008 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:11.008 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:11.009 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:11.009 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:11.009 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:11.009 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:11.009 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:11.009 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:11.009 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:11.009 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:11.009 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:11.009 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:11.009 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:11.009 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:11.009 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:11.009 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:11.009 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:11.009 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:11.009 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:11.009 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:11.009 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:11.009 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:11.009 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:11.009 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:11.009 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:11.009 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:11.009 19:04:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:11.009 19:04:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:11.009 19:04:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:11.009 19:04:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:11.009 19:04:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:11.009 19:04:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:11.009 19:04:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:11.009 19:04:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:11.267 19:04:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:11.267 19:04:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:11.267 19:04:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:11.267 19:04:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:11.267 19:04:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:11.524 19:04:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:11.524 19:04:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:11.524 19:04:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:11.524 19:04:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:11.524 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:11.524 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:11.524 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:11.524 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:11.524 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:11.524 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:11.524 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:11.524 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:11.524 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:11.524 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:11.524 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:11.524 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:11.524 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:11.524 ' 00:34:16.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:16.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:16.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:16.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:16.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:16.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:16.786 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:16.786 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:16.786 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:16.786 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:16.786 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:34:16.786 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:16.786 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:16.786 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:16.786 19:04:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:16.786 19:04:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:16.786 19:04:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:16.786 19:04:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1545059 00:34:16.786 19:04:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 1545059 ']' 00:34:16.786 19:04:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 1545059 00:34:16.786 19:04:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:34:16.786 19:04:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:16.786 19:04:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1545059 00:34:16.786 19:04:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:16.786 19:04:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:16.786 19:04:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1545059' 00:34:16.786 killing process with pid 1545059 00:34:16.786 19:04:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 1545059 00:34:16.786 19:04:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 1545059 00:34:16.786 19:04:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:16.786 19:04:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:16.786 19:04:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1545059 ']' 00:34:16.786 19:04:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1545059 00:34:16.786 19:04:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 1545059 ']' 00:34:16.786 19:04:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 1545059 00:34:16.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1545059) - No such process 00:34:16.786 19:04:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 1545059 is not found' 00:34:16.786 Process with pid 1545059 is not found 00:34:16.786 19:04:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:16.786 19:04:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:16.786 19:04:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:16.786 00:34:16.786 real 0m15.941s 00:34:16.786 user 0m33.612s 00:34:16.786 sys 0m0.827s 00:34:16.786 19:04:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:16.786 19:04:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:16.786 ************************************ 00:34:16.786 END TEST spdkcli_nvmf_tcp 00:34:16.786 ************************************ 00:34:16.786 19:04:27 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:16.786 19:04:27 -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:16.786 19:04:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:16.786 19:04:27 -- common/autotest_common.sh@10 -- # set +x 00:34:17.046 ************************************ 00:34:17.046 START TEST nvmf_identify_passthru 00:34:17.046 ************************************ 00:34:17.046 19:04:27 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:17.046 * Looking for test storage... 00:34:17.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:17.046 19:04:27 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:17.046 19:04:27 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:17.046 19:04:27 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:17.046 19:04:27 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:17.046 19:04:27 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.046 19:04:27 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.046 19:04:27 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.046 19:04:27 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:17.046 19:04:27 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:17.046 19:04:27 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:17.046 19:04:27 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:17.046 19:04:27 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:17.046 19:04:27 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:17.046 19:04:27 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.046 19:04:27 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.046 19:04:27 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.046 19:04:27 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:17.046 19:04:27 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.046 19:04:27 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:17.046 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.046 19:04:27 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:17.046 19:04:27 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:17.047 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:17.047 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:17.047 19:04:27 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:17.047 19:04:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.950 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:18.950 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:34:18.950 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:18.950 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:18.950 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:18.950 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:18.950 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
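(Annotation, not part of the captured trace.) The gather_supported_nvmf_pci_devs pass that runs here seeds the e810/x722/mlx arrays with known Intel E810/X722 and Mellanox PCI device IDs, matches the machine's PCI functions against them, and collects the kernel net interfaces found under each match (the cvl_0_0/cvl_0_1 devices reported further down). A minimal stand-alone sketch of that sysfs lookup follows; the ID list is copied from the trace, while the loop itself is an illustrative assumption rather than the helper's actual code:

    # Walk PCI functions and print the net interfaces of supported NVMe-oF NICs.
    intel=0x8086 mellanox=0x15b3
    supported=("$intel:0x1592" "$intel:0x159b" "$intel:0x37d2" "$mellanox:0x1017" "$mellanox:0x1019")
    for dev in /sys/bus/pci/devices/*; do
        id="$(cat "$dev/vendor"):$(cat "$dev/device")"
        for want in "${supported[@]}"; do
            # e.g. 0000:0a:00.0 (0x8086:0x159b, ice driver) -> cvl_0_0
            [[ $id == "$want" ]] && ls "$dev/net" 2>/dev/null
        done
    done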
00:34:18.950 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:18.950 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:18.950 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:18.950 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:18.950 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:18.950 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:18.950 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:18.950 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:18.950 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:18.950 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:18.950 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:18.950 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:18.950 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:18.950 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:18.950 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:18.950 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:18.951 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:18.951 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:18.951 19:04:29 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:18.951 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:18.951 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:18.951 19:04:29 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:18.951 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:19.210 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:19.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:19.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:34:19.210 00:34:19.210 --- 10.0.0.2 ping statistics --- 00:34:19.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.210 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:34:19.210 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:19.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:19.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:34:19.210 00:34:19.210 --- 10.0.0.1 ping statistics --- 00:34:19.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.210 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:34:19.210 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:19.210 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:19.210 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:19.210 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:19.210 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:19.210 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:19.210 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:19.210 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:19.210 19:04:29 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:19.210 19:04:29 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:19.210 19:04:29 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:19.210 19:04:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:19.210 19:04:29 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:19.210 19:04:29 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:34:19.210 19:04:29 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:34:19.210 19:04:29 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:34:19.210 19:04:29 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:34:19.210 19:04:29 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:19.210 19:04:29 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:19.210 19:04:29 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:19.210 19:04:29 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:19.210 19:04:29 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:34:19.210 19:04:29 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:34:19.210 19:04:29 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:34:19.210 19:04:29 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:88:00.0 00:34:19.210 19:04:29 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:34:19.210 19:04:29 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:34:19.210 19:04:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:19.210 19:04:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:19.210 19:04:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:19.210 EAL: No free 2048 kB hugepages reported on node 1 00:34:23.395 
19:04:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:34:23.396 19:04:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:23.396 19:04:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:23.396 19:04:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:23.396 EAL: No free 2048 kB hugepages reported on node 1 00:34:27.625 19:04:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:27.625 19:04:37 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:27.625 19:04:37 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:27.625 19:04:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:27.625 19:04:37 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:27.625 19:04:37 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:27.625 19:04:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:27.625 19:04:37 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1549552 00:34:27.625 19:04:37 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:27.625 19:04:37 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:27.625 19:04:37 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1549552 00:34:27.625 19:04:37 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 1549552 ']' 00:34:27.625 19:04:37 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:27.625 19:04:37 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:27.625 19:04:37 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:27.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:27.625 19:04:37 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:27.625 19:04:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:27.625 [2024-07-20 19:04:37.828841] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:27.625 [2024-07-20 19:04:37.828930] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:27.625 EAL: No free 2048 kB hugepages reported on node 1 00:34:27.625 [2024-07-20 19:04:37.897515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:27.884 [2024-07-20 19:04:37.991180] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:27.884 [2024-07-20 19:04:37.991229] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:27.884 [2024-07-20 19:04:37.991259] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:27.884 [2024-07-20 19:04:37.991271] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:27.884 [2024-07-20 19:04:37.991288] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:27.884 [2024-07-20 19:04:37.991336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:27.884 [2024-07-20 19:04:37.991397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:27.884 [2024-07-20 19:04:37.991463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:27.884 [2024-07-20 19:04:37.991465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.884 19:04:38 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:27.884 19:04:38 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:34:27.884 19:04:38 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:27.884 19:04:38 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.884 19:04:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:27.884 INFO: Log level set to 20 00:34:27.884 INFO: Requests: 00:34:27.884 { 00:34:27.884 "jsonrpc": "2.0", 00:34:27.884 "method": "nvmf_set_config", 00:34:27.884 "id": 1, 00:34:27.884 "params": { 00:34:27.884 "admin_cmd_passthru": { 00:34:27.884 "identify_ctrlr": true 00:34:27.884 } 00:34:27.884 } 00:34:27.884 } 00:34:27.884 00:34:27.884 INFO: response: 00:34:27.884 { 00:34:27.884 "jsonrpc": "2.0", 00:34:27.884 "id": 1, 00:34:27.884 "result": true 00:34:27.884 } 00:34:27.884 00:34:27.884 19:04:38 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.884 19:04:38 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:27.884 19:04:38 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.884 19:04:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:27.884 INFO: Setting log level to 20 00:34:27.884 INFO: Setting log level to 20 00:34:27.884 INFO: Log level set to 20 00:34:27.884 INFO: Log level set to 20 00:34:27.884 INFO: Requests: 00:34:27.884 { 00:34:27.884 "jsonrpc": "2.0", 00:34:27.884 "method": "framework_start_init", 00:34:27.884 "id": 1 00:34:27.884 } 00:34:27.884 00:34:27.884 INFO: Requests: 00:34:27.884 { 00:34:27.884 "jsonrpc": "2.0", 00:34:27.884 "method": "framework_start_init", 00:34:27.884 "id": 1 00:34:27.884 } 00:34:27.884 00:34:27.884 [2024-07-20 19:04:38.134158] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:27.884 INFO: response: 00:34:27.884 { 00:34:27.884 "jsonrpc": "2.0", 00:34:27.884 "id": 1, 00:34:27.884 "result": true 00:34:27.884 } 00:34:27.884 00:34:27.884 INFO: response: 00:34:27.884 { 00:34:27.884 "jsonrpc": "2.0", 00:34:27.884 "id": 1, 00:34:27.884 "result": true 00:34:27.884 } 00:34:27.884 00:34:27.884 19:04:38 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.884 19:04:38 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:27.884 19:04:38 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.884 19:04:38 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:34:27.884 INFO: Setting log level to 40 00:34:27.884 INFO: Setting log level to 40 00:34:27.884 INFO: Setting log level to 40 00:34:27.884 [2024-07-20 19:04:38.144224] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:27.884 19:04:38 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.884 19:04:38 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:27.884 19:04:38 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:27.884 19:04:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:27.884 19:04:38 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:34:27.884 19:04:38 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.884 19:04:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:31.165 Nvme0n1 00:34:31.165 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.165 19:04:41 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:31.165 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.165 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:31.165 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.165 19:04:41 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:31.165 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.165 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:31.165 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.165 19:04:41 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:31.165 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.165 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:31.165 [2024-07-20 19:04:41.038165] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:31.165 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.165 19:04:41 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:31.165 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.165 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:31.165 [ 00:34:31.165 { 00:34:31.165 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:31.165 "subtype": "Discovery", 00:34:31.165 "listen_addresses": [], 00:34:31.165 "allow_any_host": true, 00:34:31.165 "hosts": [] 00:34:31.165 }, 00:34:31.165 { 00:34:31.165 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:31.165 "subtype": "NVMe", 00:34:31.165 "listen_addresses": [ 00:34:31.165 { 00:34:31.165 "trtype": "TCP", 00:34:31.165 "adrfam": "IPv4", 00:34:31.165 "traddr": "10.0.0.2", 00:34:31.165 "trsvcid": "4420" 00:34:31.165 } 00:34:31.165 ], 00:34:31.165 "allow_any_host": true, 00:34:31.165 "hosts": [], 00:34:31.165 "serial_number": 
"SPDK00000000000001", 00:34:31.165 "model_number": "SPDK bdev Controller", 00:34:31.165 "max_namespaces": 1, 00:34:31.165 "min_cntlid": 1, 00:34:31.165 "max_cntlid": 65519, 00:34:31.165 "namespaces": [ 00:34:31.165 { 00:34:31.165 "nsid": 1, 00:34:31.165 "bdev_name": "Nvme0n1", 00:34:31.165 "name": "Nvme0n1", 00:34:31.165 "nguid": "7E86B55F469945C5828703C17E0D0C20", 00:34:31.165 "uuid": "7e86b55f-4699-45c5-8287-03c17e0d0c20" 00:34:31.165 } 00:34:31.165 ] 00:34:31.165 } 00:34:31.165 ] 00:34:31.165 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.165 19:04:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:31.165 19:04:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:31.165 19:04:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:31.165 EAL: No free 2048 kB hugepages reported on node 1 00:34:31.165 19:04:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:34:31.165 19:04:41 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:31.165 19:04:41 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:31.165 19:04:41 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:31.165 EAL: No free 2048 kB hugepages reported on node 1 00:34:31.165 19:04:41 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:31.165 19:04:41 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:34:31.165 19:04:41 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:31.165 19:04:41 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:31.165 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.165 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:31.165 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.165 19:04:41 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:31.165 19:04:41 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:31.165 19:04:41 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:31.165 19:04:41 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:34:31.165 19:04:41 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:31.165 19:04:41 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:34:31.165 19:04:41 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:31.165 19:04:41 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:31.165 rmmod nvme_tcp 00:34:31.165 rmmod nvme_fabrics 00:34:31.165 rmmod nvme_keyring 00:34:31.165 19:04:41 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:31.165 19:04:41 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:34:31.165 19:04:41 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:34:31.165 19:04:41 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1549552 ']' 00:34:31.165 19:04:41 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1549552 00:34:31.165 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 1549552 ']' 00:34:31.165 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 1549552 00:34:31.165 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:34:31.165 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:31.422 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1549552 00:34:31.422 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:31.422 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:31.422 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1549552' 00:34:31.422 killing process with pid 1549552 00:34:31.422 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 1549552 00:34:31.422 19:04:41 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 1549552 00:34:32.793 19:04:43 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:32.793 19:04:43 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:32.793 19:04:43 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:32.793 19:04:43 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:32.793 19:04:43 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:32.793 19:04:43 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.793 19:04:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:32.793 19:04:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.316 19:04:45 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:35.316 00:34:35.316 real 0m17.966s 00:34:35.316 user 0m26.588s 00:34:35.316 sys 0m2.360s 00:34:35.317 19:04:45 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:35.317 19:04:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:35.317 ************************************ 00:34:35.317 END TEST nvmf_identify_passthru 00:34:35.317 ************************************ 00:34:35.317 19:04:45 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:35.317 19:04:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:35.317 19:04:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:35.317 19:04:45 -- common/autotest_common.sh@10 -- # set +x 00:34:35.317 ************************************ 00:34:35.317 START TEST nvmf_dif 00:34:35.317 ************************************ 00:34:35.317 19:04:45 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:35.317 * Looking for test storage... 
00:34:35.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:35.317 19:04:45 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:35.317 19:04:45 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:35.317 19:04:45 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:35.317 19:04:45 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:35.317 19:04:45 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.317 19:04:45 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.317 19:04:45 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.317 19:04:45 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:34:35.317 19:04:45 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:35.317 19:04:45 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:35.317 19:04:45 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:35.317 19:04:45 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:35.317 19:04:45 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:35.317 19:04:45 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.317 19:04:45 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:35.317 19:04:45 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:35.317 19:04:45 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:34:35.317 19:04:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:37.247 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:37.247 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
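(Annotation, not part of the captured trace.) The nvmf_tcp_init sequence, already shown for nvmf_identify_passthru and repeated just below for nvmf_dif, splits the two ports of the same NIC into a target side and an initiator side: cvl_0_0 is moved into a fresh network namespace and given 10.0.0.2, cvl_0_1 stays in the root namespace with 10.0.0.1, TCP port 4420 is opened in iptables, and a ping in each direction verifies the path. Collected from the ip/iptables/ping commands recorded in the trace, the bare sequence is roughly:

    ip netns add cvl_0_0_ns_spdk                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1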
00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:37.247 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:37.247 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:37.247 19:04:47 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:37.248 19:04:47 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:37.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:37.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:34:37.248 00:34:37.248 --- 10.0.0.2 ping statistics --- 00:34:37.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:37.248 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:37.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:37.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:34:37.248 00:34:37.248 --- 10.0.0.1 ping statistics --- 00:34:37.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:37.248 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:34:37.248 19:04:47 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:38.184 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:38.184 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:38.184 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:38.184 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:38.184 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:38.184 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:38.184 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:38.184 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:38.184 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:38.184 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:38.184 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:38.184 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:38.184 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:38.184 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:38.184 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:38.184 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:38.184 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:38.184 19:04:48 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:38.184 19:04:48 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:38.184 19:04:48 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:38.184 19:04:48 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:38.184 19:04:48 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:38.184 19:04:48 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:38.184 19:04:48 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:38.184 19:04:48 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:38.184 19:04:48 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:38.184 19:04:48 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:38.184 19:04:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:38.184 19:04:48 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1552698 00:34:38.184 19:04:48 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:38.184 19:04:48 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1552698 00:34:38.184 19:04:48 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 1552698 ']' 00:34:38.184 19:04:48 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:38.184 19:04:48 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:38.184 19:04:48 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:38.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:38.184 19:04:48 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:38.184 19:04:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:38.443 [2024-07-20 19:04:48.543326] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:38.443 [2024-07-20 19:04:48.543408] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:38.443 EAL: No free 2048 kB hugepages reported on node 1 00:34:38.443 [2024-07-20 19:04:48.613157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:38.443 [2024-07-20 19:04:48.700184] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:38.444 [2024-07-20 19:04:48.700240] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:38.444 [2024-07-20 19:04:48.700254] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:38.444 [2024-07-20 19:04:48.700265] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:38.444 [2024-07-20 19:04:48.700274] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
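(Annotation, not part of the captured trace.) Both tests launch nvmf_tgt inside the target namespace via "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF", wait for it to listen on /var/tmp/spdk.sock, and then configure it through rpc_cmd (the identify_passthru instance was additionally started with --wait-for-rpc, which is why it issues an explicit framework_start_init). The rpc_cmd calls recorded earlier map onto SPDK's scripts/rpc.py; issued by hand, the passthru-identify configuration would look roughly like the sketch below. Method names and flags are exactly the ones visible in the trace; the direct rpc.py invocation is an assumption about how the wrapper is normally driven:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_set_config --passthru-identify-ctrlr      # enable the custom identify handler
    $rpc framework_start_init                            # only needed with --wait-for-rpc
    $rpc nvmf_create_transport -t tcp -o -u 8192         # transport options as recorded above
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420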
00:34:38.444 [2024-07-20 19:04:48.700308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:38.703 19:04:48 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:38.703 19:04:48 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:34:38.703 19:04:48 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:38.703 19:04:48 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:38.703 19:04:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:38.703 19:04:48 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:38.703 19:04:48 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:38.703 19:04:48 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:38.703 19:04:48 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.703 19:04:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:38.703 [2024-07-20 19:04:48.836606] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:38.703 19:04:48 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.703 19:04:48 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:38.703 19:04:48 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:38.703 19:04:48 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:38.703 19:04:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:38.703 ************************************ 00:34:38.703 START TEST fio_dif_1_default 00:34:38.703 ************************************ 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:38.703 bdev_null0 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:38.703 [2024-07-20 19:04:48.892907] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:38.703 19:04:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:38.704 { 00:34:38.704 "params": { 00:34:38.704 "name": "Nvme$subsystem", 00:34:38.704 "trtype": "$TEST_TRANSPORT", 00:34:38.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:38.704 "adrfam": "ipv4", 00:34:38.704 "trsvcid": "$NVMF_PORT", 00:34:38.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:38.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:38.704 "hdgst": ${hdgst:-false}, 00:34:38.704 "ddgst": ${ddgst:-false} 00:34:38.704 }, 00:34:38.704 "method": "bdev_nvme_attach_controller" 00:34:38.704 } 00:34:38.704 EOF 00:34:38.704 )") 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:38.704 "params": { 00:34:38.704 "name": "Nvme0", 00:34:38.704 "trtype": "tcp", 00:34:38.704 "traddr": "10.0.0.2", 00:34:38.704 "adrfam": "ipv4", 00:34:38.704 "trsvcid": "4420", 00:34:38.704 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:38.704 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:38.704 "hdgst": false, 00:34:38.704 "ddgst": false 00:34:38.704 }, 00:34:38.704 "method": "bdev_nvme_attach_controller" 00:34:38.704 }' 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:38.704 19:04:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:38.963 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:38.963 fio-3.35 00:34:38.963 Starting 1 thread 00:34:38.963 EAL: No free 2048 kB hugepages reported on node 1 00:34:51.164 00:34:51.164 filename0: (groupid=0, jobs=1): err= 0: pid=1552927: Sat Jul 20 19:04:59 2024 00:34:51.164 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10001msec) 00:34:51.164 slat (nsec): min=4161, max=63023, avg=9572.01, stdev=5236.13 00:34:51.164 clat (usec): min=41817, max=45598, avg=41989.31, stdev=246.23 00:34:51.164 lat (usec): min=41824, max=45635, avg=41998.88, stdev=246.50 00:34:51.164 clat percentiles (usec): 00:34:51.164 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:34:51.164 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:51.164 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:51.164 | 99.00th=[42206], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:34:51.164 | 99.99th=[45351] 00:34:51.164 bw ( KiB/s): min= 352, max= 384, per=99.80%, avg=380.63, stdev=10.09, samples=19 00:34:51.164 iops : min= 88, max= 96, avg=95.16, stdev= 2.52, samples=19 00:34:51.164 
lat (msec) : 50=100.00% 00:34:51.164 cpu : usr=90.00%, sys=9.72%, ctx=18, majf=0, minf=272 00:34:51.164 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:51.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.164 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.164 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:51.164 00:34:51.164 Run status group 0 (all jobs): 00:34:51.164 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3808KiB (3899kB), run=10001-10001msec 00:34:51.164 19:05:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:51.164 19:05:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:51.164 19:05:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:51.164 19:05:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:51.164 19:05:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:51.164 19:05:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:51.164 19:05:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.164 19:05:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:51.164 19:05:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.164 19:05:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:51.164 19:05:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.164 19:05:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:51.164 19:05:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.164 00:34:51.164 real 0m11.196s 00:34:51.164 user 0m10.183s 00:34:51.164 sys 0m1.227s 00:34:51.164 19:05:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:51.164 19:05:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:51.164 ************************************ 00:34:51.164 END TEST fio_dif_1_default 00:34:51.164 ************************************ 00:34:51.164 19:05:00 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:51.164 19:05:00 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:51.164 19:05:00 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:51.164 19:05:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:51.164 ************************************ 00:34:51.164 START TEST fio_dif_1_multi_subsystems 00:34:51.164 ************************************ 00:34:51.164 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:34:51.164 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:51.164 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:51.164 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:51.164 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:51.164 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:51.164 19:05:00 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:51.164 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:51.164 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:51.165 bdev_null0 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:51.165 [2024-07-20 19:05:00.136680] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:51.165 bdev_null1 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:51.165 { 00:34:51.165 "params": { 00:34:51.165 "name": "Nvme$subsystem", 00:34:51.165 "trtype": "$TEST_TRANSPORT", 00:34:51.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:51.165 "adrfam": "ipv4", 00:34:51.165 "trsvcid": "$NVMF_PORT", 00:34:51.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:51.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:51.165 "hdgst": ${hdgst:-false}, 00:34:51.165 "ddgst": ${ddgst:-false} 00:34:51.165 }, 00:34:51.165 "method": "bdev_nvme_attach_controller" 00:34:51.165 } 00:34:51.165 EOF 00:34:51.165 )") 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:34:51.165 19:05:00 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:51.165 { 00:34:51.165 "params": { 00:34:51.165 "name": "Nvme$subsystem", 00:34:51.165 "trtype": "$TEST_TRANSPORT", 00:34:51.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:51.165 "adrfam": "ipv4", 00:34:51.165 "trsvcid": "$NVMF_PORT", 00:34:51.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:51.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:51.165 "hdgst": ${hdgst:-false}, 00:34:51.165 "ddgst": ${ddgst:-false} 00:34:51.165 }, 00:34:51.165 "method": "bdev_nvme_attach_controller" 00:34:51.165 } 00:34:51.165 EOF 00:34:51.165 )") 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
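The harness pipes the generated bdev configuration into fio through /dev/fd/62; saving the two-controller JSON printed just below to a file works the same way when replaying this step by hand. A minimal sketch, assuming the Nvme0/Nvme1 controllers declared in that JSON expose bdevs named Nvme0n1 and Nvme1n1; the plugin path and the --spdk_json_conf form are taken from the fio command line in the trace, while the job options are illustrative values matching the fio banner further down rather than the harness' generated job file:

  # save the JSON printed below as /tmp/dif_bdev.json, then drive both namespaces
  # through the SPDK fio bdev plugin (thread mode, 4k random reads, queue depth 4)
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/dif_bdev.json \
      --thread=1 --rw=randread --bs=4k --iodepth=4 \
      --name=filename0 --filename=Nvme0n1 \
      --name=filename1 --filename=Nvme1n1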
00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:51.165 "params": { 00:34:51.165 "name": "Nvme0", 00:34:51.165 "trtype": "tcp", 00:34:51.165 "traddr": "10.0.0.2", 00:34:51.165 "adrfam": "ipv4", 00:34:51.165 "trsvcid": "4420", 00:34:51.165 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:51.165 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:51.165 "hdgst": false, 00:34:51.165 "ddgst": false 00:34:51.165 }, 00:34:51.165 "method": "bdev_nvme_attach_controller" 00:34:51.165 },{ 00:34:51.165 "params": { 00:34:51.165 "name": "Nvme1", 00:34:51.165 "trtype": "tcp", 00:34:51.165 "traddr": "10.0.0.2", 00:34:51.165 "adrfam": "ipv4", 00:34:51.165 "trsvcid": "4420", 00:34:51.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:51.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:51.165 "hdgst": false, 00:34:51.165 "ddgst": false 00:34:51.165 }, 00:34:51.165 "method": "bdev_nvme_attach_controller" 00:34:51.165 }' 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:51.165 19:05:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:51.165 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:51.165 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:51.165 fio-3.35 00:34:51.165 Starting 2 threads 00:34:51.165 EAL: No free 2048 kB hugepages reported on node 1 00:35:01.131 00:35:01.131 filename0: (groupid=0, jobs=1): err= 0: pid=1554325: Sat Jul 20 19:05:11 2024 00:35:01.131 read: IOPS=185, BW=742KiB/s (760kB/s)(7440KiB/10022msec) 00:35:01.131 slat (nsec): min=7072, max=39397, avg=8921.11, stdev=3350.20 00:35:01.131 clat (usec): min=1167, max=42757, avg=21524.75, stdev=20181.87 00:35:01.131 lat (usec): min=1174, max=42797, avg=21533.67, stdev=20181.43 00:35:01.131 clat percentiles (usec): 00:35:01.131 | 1.00th=[ 1188], 5.00th=[ 1237], 10.00th=[ 1270], 20.00th=[ 1287], 00:35:01.131 | 30.00th=[ 1303], 40.00th=[ 1336], 50.00th=[41681], 60.00th=[41681], 00:35:01.131 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:35:01.131 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:35:01.131 | 99.99th=[42730] 
00:35:01.131 bw ( KiB/s): min= 704, max= 768, per=56.96%, avg=742.40, stdev=32.17, samples=20 00:35:01.131 iops : min= 176, max= 192, avg=185.60, stdev= 8.04, samples=20 00:35:01.131 lat (msec) : 2=49.89%, 50=50.11% 00:35:01.131 cpu : usr=94.32%, sys=5.38%, ctx=18, majf=0, minf=137 00:35:01.131 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.131 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.131 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:01.131 filename1: (groupid=0, jobs=1): err= 0: pid=1554326: Sat Jul 20 19:05:11 2024 00:35:01.131 read: IOPS=140, BW=561KiB/s (575kB/s)(5616KiB/10008msec) 00:35:01.131 slat (nsec): min=7140, max=62858, avg=9174.42, stdev=4204.47 00:35:01.131 clat (usec): min=1136, max=43484, avg=28483.61, stdev=19071.22 00:35:01.131 lat (usec): min=1144, max=43496, avg=28492.78, stdev=19070.86 00:35:01.131 clat percentiles (usec): 00:35:01.131 | 1.00th=[ 1254], 5.00th=[ 1287], 10.00th=[ 1303], 20.00th=[ 1319], 00:35:01.131 | 30.00th=[ 1549], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:35:01.131 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:01.131 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:35:01.131 | 99.99th=[43254] 00:35:01.131 bw ( KiB/s): min= 352, max= 768, per=42.99%, avg=560.00, stdev=179.67, samples=20 00:35:01.131 iops : min= 88, max= 192, avg=140.00, stdev=44.92, samples=20 00:35:01.131 lat (msec) : 2=33.05%, 50=66.95% 00:35:01.131 cpu : usr=94.60%, sys=5.11%, ctx=11, majf=0, minf=131 00:35:01.131 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.131 issued rwts: total=1404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.131 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:01.131 00:35:01.131 Run status group 0 (all jobs): 00:35:01.131 READ: bw=1303KiB/s (1334kB/s), 561KiB/s-742KiB/s (575kB/s-760kB/s), io=12.8MiB (13.4MB), run=10008-10022msec 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.131 19:05:11 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.131 00:35:01.131 real 0m11.329s 00:35:01.131 user 0m20.226s 00:35:01.131 sys 0m1.349s 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:01.131 19:05:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:01.131 ************************************ 00:35:01.131 END TEST fio_dif_1_multi_subsystems 00:35:01.131 ************************************ 00:35:01.391 19:05:11 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:01.391 19:05:11 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:01.391 19:05:11 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:01.391 19:05:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:01.391 ************************************ 00:35:01.391 START TEST fio_dif_rand_params 00:35:01.391 ************************************ 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:01.391 bdev_null0 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:01.391 [2024-07-20 19:05:11.511559] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:01.391 { 00:35:01.391 "params": { 00:35:01.391 "name": "Nvme$subsystem", 00:35:01.391 "trtype": 
"$TEST_TRANSPORT", 00:35:01.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:01.391 "adrfam": "ipv4", 00:35:01.391 "trsvcid": "$NVMF_PORT", 00:35:01.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:01.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:01.391 "hdgst": ${hdgst:-false}, 00:35:01.391 "ddgst": ${ddgst:-false} 00:35:01.391 }, 00:35:01.391 "method": "bdev_nvme_attach_controller" 00:35:01.391 } 00:35:01.391 EOF 00:35:01.391 )") 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:01.391 "params": { 00:35:01.391 "name": "Nvme0", 00:35:01.391 "trtype": "tcp", 00:35:01.391 "traddr": "10.0.0.2", 00:35:01.391 "adrfam": "ipv4", 00:35:01.391 "trsvcid": "4420", 00:35:01.391 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:01.391 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:01.391 "hdgst": false, 00:35:01.391 "ddgst": false 00:35:01.391 }, 00:35:01.391 "method": "bdev_nvme_attach_controller" 00:35:01.391 }' 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:01.391 19:05:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:01.650 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:01.650 ... 
00:35:01.650 fio-3.35 00:35:01.650 Starting 3 threads 00:35:01.650 EAL: No free 2048 kB hugepages reported on node 1 00:35:08.214 00:35:08.214 filename0: (groupid=0, jobs=1): err= 0: pid=1555717: Sat Jul 20 19:05:17 2024 00:35:08.214 read: IOPS=172, BW=21.5MiB/s (22.5MB/s)(108MiB/5023msec) 00:35:08.214 slat (nsec): min=5256, max=68310, avg=12180.17, stdev=3705.18 00:35:08.214 clat (usec): min=8377, max=96625, avg=17419.88, stdev=14911.41 00:35:08.214 lat (usec): min=8389, max=96638, avg=17432.06, stdev=14911.49 00:35:08.214 clat percentiles (usec): 00:35:08.214 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9503], 20.00th=[10159], 00:35:08.214 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11863], 60.00th=[12780], 00:35:08.214 | 70.00th=[13566], 80.00th=[14877], 90.00th=[52167], 95.00th=[54264], 00:35:08.214 | 99.00th=[56361], 99.50th=[56886], 99.90th=[96994], 99.95th=[96994], 00:35:08.214 | 99.99th=[96994] 00:35:08.214 bw ( KiB/s): min=13312, max=32768, per=35.05%, avg=22046.20, stdev=5711.81, samples=10 00:35:08.214 iops : min= 104, max= 256, avg=172.20, stdev=44.62, samples=10 00:35:08.214 lat (msec) : 10=18.75%, 20=68.17%, 100=13.08% 00:35:08.214 cpu : usr=90.42%, sys=8.90%, ctx=12, majf=0, minf=137 00:35:08.214 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:08.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.214 issued rwts: total=864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:08.214 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:08.214 filename0: (groupid=0, jobs=1): err= 0: pid=1555718: Sat Jul 20 19:05:17 2024 00:35:08.214 read: IOPS=159, BW=19.9MiB/s (20.9MB/s)(100MiB/5020msec) 00:35:08.214 slat (nsec): min=5226, max=34372, avg=11851.49, stdev=3438.53 00:35:08.214 clat (usec): min=7538, max=95807, avg=18802.53, stdev=15746.77 00:35:08.215 lat (usec): min=7550, max=95821, avg=18814.38, stdev=15746.85 00:35:08.215 clat percentiles (usec): 00:35:08.215 | 1.00th=[ 8029], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10683], 00:35:08.215 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12387], 60.00th=[13304], 00:35:08.215 | 70.00th=[14091], 80.00th=[15008], 90.00th=[53216], 95.00th=[54264], 00:35:08.215 | 99.00th=[56886], 99.50th=[58983], 99.90th=[95945], 99.95th=[95945], 00:35:08.215 | 99.99th=[95945] 00:35:08.215 bw ( KiB/s): min=15872, max=26880, per=32.44%, avg=20403.20, stdev=3335.76, samples=10 00:35:08.215 iops : min= 124, max= 210, avg=159.40, stdev=26.06, samples=10 00:35:08.215 lat (msec) : 10=10.62%, 20=73.50%, 100=15.88% 00:35:08.215 cpu : usr=90.95%, sys=8.37%, ctx=9, majf=0, minf=103 00:35:08.215 IO depths : 1=3.0%, 2=97.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:08.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.215 issued rwts: total=800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:08.215 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:08.215 filename0: (groupid=0, jobs=1): err= 0: pid=1555719: Sat Jul 20 19:05:17 2024 00:35:08.215 read: IOPS=161, BW=20.2MiB/s (21.2MB/s)(102MiB/5047msec) 00:35:08.215 slat (nsec): min=5312, max=33966, avg=12132.51, stdev=3540.67 00:35:08.215 clat (usec): min=7586, max=93152, avg=18429.17, stdev=15550.57 00:35:08.215 lat (usec): min=7599, max=93165, avg=18441.31, stdev=15550.24 00:35:08.215 clat percentiles (usec): 00:35:08.215 | 
1.00th=[ 8160], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[10421], 00:35:08.215 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12387], 60.00th=[13042], 00:35:08.215 | 70.00th=[13960], 80.00th=[15008], 90.00th=[53216], 95.00th=[54264], 00:35:08.215 | 99.00th=[55837], 99.50th=[60031], 99.90th=[92799], 99.95th=[92799], 00:35:08.215 | 99.99th=[92799] 00:35:08.215 bw ( KiB/s): min=10496, max=30208, per=33.09%, avg=20812.80, stdev=5637.23, samples=10 00:35:08.215 iops : min= 82, max= 236, avg=162.60, stdev=44.04, samples=10 00:35:08.215 lat (msec) : 10=14.46%, 20=70.22%, 50=0.25%, 100=15.07% 00:35:08.215 cpu : usr=90.86%, sys=8.46%, ctx=11, majf=0, minf=111 00:35:08.215 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:08.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.215 issued rwts: total=816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:08.215 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:08.215 00:35:08.215 Run status group 0 (all jobs): 00:35:08.215 READ: bw=61.4MiB/s (64.4MB/s), 19.9MiB/s-21.5MiB/s (20.9MB/s-22.5MB/s), io=310MiB (325MB), run=5020-5047msec 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:08.215 19:05:17 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:08.215 bdev_null0 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:08.215 [2024-07-20 19:05:17.765721] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:08.215 bdev_null1 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
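Each subsystem in this phase is assembled from the same four RPCs visible in the trace: a DIF-capable null bdev, a subsystem, a namespace, and a TCP listener. Collected in one place as a sketch, using scripts/rpc.py directly instead of the harness' rpc_cmd wrapper, with the subsystem-0 values and the /var/tmp/spdk.sock RPC socket already shown above:

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

  # 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 2
  $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420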
00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:08.215 bdev_null2 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 
-- # local fio_dir=/usr/src/fio 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:08.215 { 00:35:08.215 "params": { 00:35:08.215 "name": "Nvme$subsystem", 00:35:08.215 "trtype": "$TEST_TRANSPORT", 00:35:08.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:08.215 "adrfam": "ipv4", 00:35:08.215 "trsvcid": "$NVMF_PORT", 00:35:08.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:08.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:08.215 "hdgst": ${hdgst:-false}, 00:35:08.215 "ddgst": ${ddgst:-false} 00:35:08.215 }, 00:35:08.215 "method": "bdev_nvme_attach_controller" 00:35:08.215 } 00:35:08.215 EOF 00:35:08.215 )") 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:08.215 { 00:35:08.215 "params": { 00:35:08.215 "name": "Nvme$subsystem", 00:35:08.215 "trtype": "$TEST_TRANSPORT", 00:35:08.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:08.215 "adrfam": "ipv4", 00:35:08.215 "trsvcid": "$NVMF_PORT", 00:35:08.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:08.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:08.215 "hdgst": ${hdgst:-false}, 00:35:08.215 "ddgst": ${ddgst:-false} 00:35:08.215 }, 00:35:08.215 "method": "bdev_nvme_attach_controller" 00:35:08.215 } 00:35:08.215 EOF 00:35:08.215 )") 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= 
files )) 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:08.215 { 00:35:08.215 "params": { 00:35:08.215 "name": "Nvme$subsystem", 00:35:08.215 "trtype": "$TEST_TRANSPORT", 00:35:08.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:08.215 "adrfam": "ipv4", 00:35:08.215 "trsvcid": "$NVMF_PORT", 00:35:08.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:08.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:08.215 "hdgst": ${hdgst:-false}, 00:35:08.215 "ddgst": ${ddgst:-false} 00:35:08.215 }, 00:35:08.215 "method": "bdev_nvme_attach_controller" 00:35:08.215 } 00:35:08.215 EOF 00:35:08.215 )") 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:08.215 "params": { 00:35:08.215 "name": "Nvme0", 00:35:08.215 "trtype": "tcp", 00:35:08.215 "traddr": "10.0.0.2", 00:35:08.215 "adrfam": "ipv4", 00:35:08.215 "trsvcid": "4420", 00:35:08.215 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:08.215 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:08.215 "hdgst": false, 00:35:08.215 "ddgst": false 00:35:08.215 }, 00:35:08.215 "method": "bdev_nvme_attach_controller" 00:35:08.215 },{ 00:35:08.215 "params": { 00:35:08.215 "name": "Nvme1", 00:35:08.215 "trtype": "tcp", 00:35:08.215 "traddr": "10.0.0.2", 00:35:08.215 "adrfam": "ipv4", 00:35:08.215 "trsvcid": "4420", 00:35:08.215 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:08.215 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:08.215 "hdgst": false, 00:35:08.215 "ddgst": false 00:35:08.215 }, 00:35:08.215 "method": "bdev_nvme_attach_controller" 00:35:08.215 },{ 00:35:08.215 "params": { 00:35:08.215 "name": "Nvme2", 00:35:08.215 "trtype": "tcp", 00:35:08.215 "traddr": "10.0.0.2", 00:35:08.215 "adrfam": "ipv4", 00:35:08.215 "trsvcid": "4420", 00:35:08.215 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:08.215 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:08.215 "hdgst": false, 00:35:08.215 "ddgst": false 00:35:08.215 }, 00:35:08.215 "method": "bdev_nvme_attach_controller" 00:35:08.215 }' 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:08.215 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:08.216 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:08.216 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:08.216 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:08.216 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 
-- # asan_lib= 00:35:08.216 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:08.216 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:08.216 19:05:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:08.216 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:08.216 ... 00:35:08.216 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:08.216 ... 00:35:08.216 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:08.216 ... 00:35:08.216 fio-3.35 00:35:08.216 Starting 24 threads 00:35:08.216 EAL: No free 2048 kB hugepages reported on node 1 00:35:20.437 00:35:20.437 filename0: (groupid=0, jobs=1): err= 0: pid=1556582: Sat Jul 20 19:05:29 2024 00:35:20.437 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.4MiB/10162msec) 00:35:20.437 slat (usec): min=8, max=113, avg=29.07, stdev=12.71 00:35:20.437 clat (msec): min=12, max=190, avg=32.60, stdev=10.39 00:35:20.437 lat (msec): min=12, max=190, avg=32.63, stdev=10.39 00:35:20.437 clat percentiles (msec): 00:35:20.437 | 1.00th=[ 20], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 31], 00:35:20.437 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:35:20.437 | 70.00th=[ 33], 80.00th=[ 33], 90.00th=[ 36], 95.00th=[ 41], 00:35:20.437 | 99.00th=[ 56], 99.50th=[ 85], 99.90th=[ 190], 99.95th=[ 190], 00:35:20.437 | 99.99th=[ 190] 00:35:20.437 bw ( KiB/s): min= 1664, max= 2055, per=4.42%, avg=1975.55, stdev=111.65, samples=20 00:35:20.437 iops : min= 416, max= 513, avg=493.85, stdev=27.88, samples=20 00:35:20.437 lat (msec) : 20=1.19%, 50=97.32%, 100=1.17%, 250=0.32% 00:35:20.437 cpu : usr=92.82%, sys=3.37%, ctx=240, majf=0, minf=25 00:35:20.437 IO depths : 1=2.2%, 2=4.8%, 4=16.9%, 8=64.4%, 16=11.6%, 32=0.0%, >=64=0.0% 00:35:20.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.437 complete : 0=0.0%, 4=93.1%, 8=2.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.437 issued rwts: total=4956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.437 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:20.437 filename0: (groupid=0, jobs=1): err= 0: pid=1556583: Sat Jul 20 19:05:29 2024 00:35:20.437 read: IOPS=471, BW=1886KiB/s (1932kB/s)(18.8MiB/10183msec) 00:35:20.437 slat (usec): min=7, max=219, avg=26.13, stdev=16.41 00:35:20.437 clat (msec): min=9, max=189, avg=33.78, stdev=11.61 00:35:20.437 lat (msec): min=9, max=190, avg=33.81, stdev=11.61 00:35:20.437 clat percentiles (msec): 00:35:20.437 | 1.00th=[ 15], 5.00th=[ 24], 10.00th=[ 29], 20.00th=[ 31], 00:35:20.437 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 33], 00:35:20.437 | 70.00th=[ 34], 80.00th=[ 37], 90.00th=[ 42], 95.00th=[ 47], 00:35:20.437 | 99.00th=[ 59], 99.50th=[ 92], 99.90th=[ 190], 99.95th=[ 190], 00:35:20.437 | 99.99th=[ 190] 00:35:20.437 bw ( KiB/s): min= 1712, max= 2016, per=4.28%, avg=1914.20, stdev=91.17, samples=20 00:35:20.437 iops : min= 428, max= 504, avg=478.55, stdev=22.79, samples=20 00:35:20.437 lat (msec) : 10=0.10%, 20=2.60%, 50=93.94%, 100=3.02%, 250=0.33% 00:35:20.437 cpu : usr=91.64%, sys=3.76%, ctx=193, majf=0, minf=42 00:35:20.437 IO depths : 1=0.5%, 2=1.2%, 
4=9.6%, 8=74.5%, 16=14.2%, 32=0.0%, >=64=0.0% 00:35:20.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.437 complete : 0=0.0%, 4=90.7%, 8=5.7%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.437 issued rwts: total=4802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.437 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:20.437 filename0: (groupid=0, jobs=1): err= 0: pid=1556584: Sat Jul 20 19:05:29 2024 00:35:20.437 read: IOPS=458, BW=1833KiB/s (1877kB/s)(18.2MiB/10161msec) 00:35:20.437 slat (usec): min=8, max=149, avg=32.64, stdev=18.61 00:35:20.437 clat (msec): min=8, max=190, avg=34.67, stdev=11.75 00:35:20.437 lat (msec): min=8, max=190, avg=34.71, stdev=11.75 00:35:20.437 clat percentiles (msec): 00:35:20.437 | 1.00th=[ 19], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 31], 00:35:20.437 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 33], 00:35:20.437 | 70.00th=[ 34], 80.00th=[ 39], 90.00th=[ 44], 95.00th=[ 47], 00:35:20.437 | 99.00th=[ 83], 99.50th=[ 85], 99.90th=[ 190], 99.95th=[ 190], 00:35:20.437 | 99.99th=[ 190] 00:35:20.437 bw ( KiB/s): min= 1536, max= 2048, per=4.15%, avg=1855.75, stdev=120.87, samples=20 00:35:20.437 iops : min= 384, max= 512, avg=463.90, stdev=30.28, samples=20 00:35:20.437 lat (msec) : 10=0.15%, 20=1.78%, 50=95.38%, 100=2.34%, 250=0.34% 00:35:20.437 cpu : usr=93.89%, sys=2.86%, ctx=94, majf=0, minf=29 00:35:20.437 IO depths : 1=3.3%, 2=6.9%, 4=21.0%, 8=59.4%, 16=9.4%, 32=0.0%, >=64=0.0% 00:35:20.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.437 complete : 0=0.0%, 4=93.3%, 8=1.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.437 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.437 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:20.437 filename0: (groupid=0, jobs=1): err= 0: pid=1556585: Sat Jul 20 19:05:29 2024 00:35:20.437 read: IOPS=456, BW=1825KiB/s (1869kB/s)(18.1MiB/10140msec) 00:35:20.437 slat (usec): min=8, max=100, avg=28.41, stdev=19.19 00:35:20.437 clat (msec): min=13, max=200, avg=34.77, stdev=11.10 00:35:20.437 lat (msec): min=13, max=200, avg=34.79, stdev=11.09 00:35:20.437 clat percentiles (msec): 00:35:20.437 | 1.00th=[ 21], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 31], 00:35:20.437 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 33], 00:35:20.437 | 70.00th=[ 35], 80.00th=[ 39], 90.00th=[ 45], 95.00th=[ 48], 00:35:20.437 | 99.00th=[ 55], 99.50th=[ 58], 99.90th=[ 201], 99.95th=[ 201], 00:35:20.437 | 99.99th=[ 201] 00:35:20.437 bw ( KiB/s): min= 1552, max= 2048, per=4.12%, avg=1844.40, stdev=111.26, samples=20 00:35:20.437 iops : min= 388, max= 512, avg=461.10, stdev=27.81, samples=20 00:35:20.437 lat (msec) : 20=0.97%, 50=96.11%, 100=2.57%, 250=0.35% 00:35:20.437 cpu : usr=97.75%, sys=1.80%, ctx=16, majf=0, minf=41 00:35:20.437 IO depths : 1=0.1%, 2=0.3%, 4=8.7%, 8=75.4%, 16=15.5%, 32=0.0%, >=64=0.0% 00:35:20.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.437 complete : 0=0.0%, 4=91.1%, 8=6.2%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.437 issued rwts: total=4627,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.437 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:20.437 filename0: (groupid=0, jobs=1): err= 0: pid=1556586: Sat Jul 20 19:05:29 2024 00:35:20.437 read: IOPS=448, BW=1796KiB/s (1839kB/s)(17.8MiB/10143msec) 00:35:20.437 slat (usec): min=8, max=103, avg=23.56, stdev=12.99 00:35:20.437 clat (msec): min=14, max=187, avg=35.29, stdev=10.82 00:35:20.437 lat 
(msec): min=14, max=187, avg=35.32, stdev=10.82 00:35:20.437 clat percentiles (msec): 00:35:20.437 | 1.00th=[ 22], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 31], 00:35:20.437 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 34], 00:35:20.437 | 70.00th=[ 36], 80.00th=[ 41], 90.00th=[ 45], 95.00th=[ 49], 00:35:20.437 | 99.00th=[ 57], 99.50th=[ 74], 99.90th=[ 188], 99.95th=[ 188], 00:35:20.437 | 99.99th=[ 188] 00:35:20.437 bw ( KiB/s): min= 1456, max= 2000, per=4.06%, avg=1814.80, stdev=136.51, samples=20 00:35:20.437 iops : min= 364, max= 500, avg=453.70, stdev=34.13, samples=20 00:35:20.437 lat (msec) : 20=0.66%, 50=95.50%, 100=3.49%, 250=0.35% 00:35:20.437 cpu : usr=97.18%, sys=1.88%, ctx=94, majf=0, minf=31 00:35:20.437 IO depths : 1=0.3%, 2=0.8%, 4=11.2%, 8=73.2%, 16=14.5%, 32=0.0%, >=64=0.0% 00:35:20.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.437 complete : 0=0.0%, 4=91.2%, 8=5.2%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.437 issued rwts: total=4554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.437 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:20.437 filename0: (groupid=0, jobs=1): err= 0: pid=1556587: Sat Jul 20 19:05:29 2024 00:35:20.437 read: IOPS=488, BW=1952KiB/s (1999kB/s)(19.4MiB/10162msec) 00:35:20.437 slat (usec): min=8, max=128, avg=31.63, stdev=18.27 00:35:20.437 clat (msec): min=14, max=190, avg=32.54, stdev=10.41 00:35:20.437 lat (msec): min=14, max=190, avg=32.57, stdev=10.41 00:35:20.437 clat percentiles (msec): 00:35:20.437 | 1.00th=[ 19], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 31], 00:35:20.437 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:35:20.437 | 70.00th=[ 33], 80.00th=[ 33], 90.00th=[ 35], 95.00th=[ 41], 00:35:20.437 | 99.00th=[ 56], 99.50th=[ 85], 99.90th=[ 190], 99.95th=[ 190], 00:35:20.437 | 99.99th=[ 190] 00:35:20.437 bw ( KiB/s): min= 1664, max= 2048, per=4.42%, avg=1977.20, stdev=101.10, samples=20 00:35:20.437 iops : min= 416, max= 512, avg=494.30, stdev=25.28, samples=20 00:35:20.437 lat (msec) : 20=1.37%, 50=97.20%, 100=1.11%, 250=0.32% 00:35:20.437 cpu : usr=97.64%, sys=1.85%, ctx=24, majf=0, minf=43 00:35:20.437 IO depths : 1=3.1%, 2=7.2%, 4=21.6%, 8=58.1%, 16=10.0%, 32=0.0%, >=64=0.0% 00:35:20.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.437 complete : 0=0.0%, 4=94.1%, 8=0.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.437 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.437 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:20.437 filename0: (groupid=0, jobs=1): err= 0: pid=1556588: Sat Jul 20 19:05:29 2024 00:35:20.437 read: IOPS=417, BW=1669KiB/s (1709kB/s)(16.6MiB/10163msec) 00:35:20.437 slat (usec): min=8, max=111, avg=28.32, stdev=18.43 00:35:20.437 clat (msec): min=11, max=190, avg=38.07, stdev=12.55 00:35:20.437 lat (msec): min=11, max=190, avg=38.10, stdev=12.54 00:35:20.438 clat percentiles (msec): 00:35:20.438 | 1.00th=[ 26], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 32], 00:35:20.438 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 37], 00:35:20.438 | 70.00th=[ 41], 80.00th=[ 45], 90.00th=[ 50], 95.00th=[ 55], 00:35:20.438 | 99.00th=[ 84], 99.50th=[ 87], 99.90th=[ 190], 99.95th=[ 190], 00:35:20.438 | 99.99th=[ 190] 00:35:20.438 bw ( KiB/s): min= 1424, max= 1976, per=3.78%, avg=1689.25, stdev=147.12, samples=20 00:35:20.438 iops : min= 356, max= 494, avg=422.30, stdev=36.78, samples=20 00:35:20.438 lat (msec) : 20=0.40%, 50=90.47%, 100=8.75%, 250=0.38% 00:35:20.438 cpu : 
usr=97.95%, sys=1.54%, ctx=15, majf=0, minf=35 00:35:20.438 IO depths : 1=0.7%, 2=1.7%, 4=11.5%, 8=72.8%, 16=13.3%, 32=0.0%, >=64=0.0% 00:35:20.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.438 complete : 0=0.0%, 4=90.2%, 8=5.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.438 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.438 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:20.438 filename0: (groupid=0, jobs=1): err= 0: pid=1556589: Sat Jul 20 19:05:29 2024 00:35:20.438 read: IOPS=456, BW=1825KiB/s (1869kB/s)(18.1MiB/10147msec) 00:35:20.438 slat (usec): min=8, max=117, avg=28.41, stdev=19.50 00:35:20.438 clat (msec): min=7, max=246, avg=34.85, stdev=12.85 00:35:20.438 lat (msec): min=7, max=246, avg=34.88, stdev=12.85 00:35:20.438 clat percentiles (msec): 00:35:20.438 | 1.00th=[ 21], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 31], 00:35:20.438 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 33], 00:35:20.438 | 70.00th=[ 35], 80.00th=[ 40], 90.00th=[ 45], 95.00th=[ 47], 00:35:20.438 | 99.00th=[ 55], 99.50th=[ 68], 99.90th=[ 247], 99.95th=[ 247], 00:35:20.438 | 99.99th=[ 247] 00:35:20.438 bw ( KiB/s): min= 1504, max= 2056, per=4.13%, avg=1845.60, stdev=126.68, samples=20 00:35:20.438 iops : min= 376, max= 514, avg=461.40, stdev=31.67, samples=20 00:35:20.438 lat (msec) : 10=0.22%, 20=0.67%, 50=96.18%, 100=2.59%, 250=0.35% 00:35:20.438 cpu : usr=97.66%, sys=1.67%, ctx=63, majf=0, minf=34 00:35:20.438 IO depths : 1=0.5%, 2=1.3%, 4=12.4%, 8=72.0%, 16=13.9%, 32=0.0%, >=64=0.0% 00:35:20.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.438 complete : 0=0.0%, 4=91.6%, 8=4.4%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.438 issued rwts: total=4630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.438 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:20.438 filename1: (groupid=0, jobs=1): err= 0: pid=1556590: Sat Jul 20 19:05:29 2024 00:35:20.438 read: IOPS=431, BW=1728KiB/s (1769kB/s)(17.1MiB/10146msec) 00:35:20.438 slat (usec): min=7, max=147, avg=42.41, stdev=25.54 00:35:20.438 clat (msec): min=9, max=245, avg=36.75, stdev=13.50 00:35:20.438 lat (msec): min=9, max=245, avg=36.79, stdev=13.50 00:35:20.438 clat percentiles (msec): 00:35:20.438 | 1.00th=[ 20], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 32], 00:35:20.438 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 36], 00:35:20.438 | 70.00th=[ 40], 80.00th=[ 43], 90.00th=[ 47], 95.00th=[ 51], 00:35:20.438 | 99.00th=[ 62], 99.50th=[ 87], 99.90th=[ 247], 99.95th=[ 247], 00:35:20.438 | 99.99th=[ 247] 00:35:20.438 bw ( KiB/s): min= 1568, max= 2016, per=3.90%, avg=1746.80, stdev=115.15, samples=20 00:35:20.438 iops : min= 392, max= 504, avg=436.70, stdev=28.79, samples=20 00:35:20.438 lat (msec) : 10=0.02%, 20=1.32%, 50=93.66%, 100=4.63%, 250=0.37% 00:35:20.438 cpu : usr=97.01%, sys=2.04%, ctx=217, majf=0, minf=23 00:35:20.438 IO depths : 1=0.1%, 2=0.5%, 4=11.6%, 8=73.9%, 16=13.9%, 32=0.0%, >=64=0.0% 00:35:20.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.438 complete : 0=0.0%, 4=91.1%, 8=4.5%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.438 issued rwts: total=4383,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.438 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:20.438 filename1: (groupid=0, jobs=1): err= 0: pid=1556591: Sat Jul 20 19:05:29 2024 00:35:20.438 read: IOPS=477, BW=1912KiB/s (1958kB/s)(19.0MiB/10162msec) 00:35:20.438 slat (usec): min=7, max=127, 
avg=28.85, stdev=19.43 00:35:20.438 clat (msec): min=9, max=189, avg=33.10, stdev= 9.91 00:35:20.438 lat (msec): min=9, max=189, avg=33.13, stdev= 9.91 00:35:20.438 clat percentiles (msec): 00:35:20.438 | 1.00th=[ 20], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 31], 00:35:20.438 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 33], 00:35:20.438 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 40], 95.00th=[ 44], 00:35:20.438 | 99.00th=[ 55], 99.50th=[ 66], 99.90th=[ 190], 99.95th=[ 190], 00:35:20.438 | 99.99th=[ 190] 00:35:20.438 bw ( KiB/s): min= 1624, max= 2048, per=4.33%, avg=1935.95, stdev=108.98, samples=20 00:35:20.438 iops : min= 406, max= 512, avg=483.95, stdev=27.22, samples=20 00:35:20.438 lat (msec) : 10=0.08%, 20=1.34%, 50=96.97%, 100=1.28%, 250=0.33% 00:35:20.438 cpu : usr=97.38%, sys=2.17%, ctx=21, majf=0, minf=28 00:35:20.438 IO depths : 1=1.3%, 2=3.1%, 4=15.3%, 8=67.8%, 16=12.5%, 32=0.0%, >=64=0.0% 00:35:20.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.438 complete : 0=0.0%, 4=92.4%, 8=3.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.438 issued rwts: total=4857,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.438 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:20.438 filename1: (groupid=0, jobs=1): err= 0: pid=1556592: Sat Jul 20 19:05:29 2024 00:35:20.438 read: IOPS=478, BW=1914KiB/s (1959kB/s)(19.0MiB/10157msec) 00:35:20.438 slat (usec): min=8, max=118, avg=40.65, stdev=25.06 00:35:20.438 clat (msec): min=12, max=188, avg=33.15, stdev=10.71 00:35:20.438 lat (msec): min=12, max=188, avg=33.19, stdev=10.71 00:35:20.438 clat percentiles (msec): 00:35:20.438 | 1.00th=[ 17], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 31], 00:35:20.438 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 33], 00:35:20.438 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 40], 95.00th=[ 44], 00:35:20.438 | 99.00th=[ 58], 99.50th=[ 86], 99.90th=[ 188], 99.95th=[ 188], 00:35:20.438 | 99.99th=[ 188] 00:35:20.438 bw ( KiB/s): min= 1648, max= 2080, per=4.33%, avg=1936.95, stdev=109.32, samples=20 00:35:20.438 iops : min= 412, max= 520, avg=484.20, stdev=27.29, samples=20 00:35:20.438 lat (msec) : 20=1.81%, 50=95.60%, 100=2.26%, 250=0.33% 00:35:20.438 cpu : usr=97.44%, sys=1.86%, ctx=94, majf=0, minf=28 00:35:20.438 IO depths : 1=0.8%, 2=3.0%, 4=16.8%, 8=66.9%, 16=12.5%, 32=0.0%, >=64=0.0% 00:35:20.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.438 complete : 0=0.0%, 4=93.0%, 8=2.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.438 issued rwts: total=4859,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.438 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:20.438 filename1: (groupid=0, jobs=1): err= 0: pid=1556593: Sat Jul 20 19:05:29 2024 00:35:20.438 read: IOPS=462, BW=1852KiB/s (1896kB/s)(18.3MiB/10114msec) 00:35:20.438 slat (nsec): min=8088, max=89871, avg=26396.03, stdev=14470.40 00:35:20.438 clat (msec): min=12, max=245, avg=34.42, stdev=13.39 00:35:20.438 lat (msec): min=12, max=246, avg=34.45, stdev=13.39 00:35:20.438 clat percentiles (msec): 00:35:20.438 | 1.00th=[ 21], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 31], 00:35:20.438 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 33], 00:35:20.438 | 70.00th=[ 34], 80.00th=[ 37], 90.00th=[ 43], 95.00th=[ 47], 00:35:20.438 | 99.00th=[ 53], 99.50th=[ 55], 99.90th=[ 247], 99.95th=[ 247], 00:35:20.438 | 99.99th=[ 247] 00:35:20.438 bw ( KiB/s): min= 1360, max= 2048, per=4.17%, avg=1866.40, stdev=174.73, samples=20 00:35:20.438 iops : min= 340, max= 512, 
avg=466.60, stdev=43.68, samples=20 00:35:20.438 lat (msec) : 20=0.85%, 50=96.75%, 100=2.05%, 250=0.34% 00:35:20.438 cpu : usr=97.67%, sys=1.78%, ctx=35, majf=0, minf=28 00:35:20.438 IO depths : 1=0.1%, 2=0.1%, 4=10.7%, 8=74.7%, 16=14.5%, 32=0.0%, >=64=0.0% 00:35:20.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.438 complete : 0=0.0%, 4=91.6%, 8=4.8%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.438 issued rwts: total=4682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.438 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:20.438 filename1: (groupid=0, jobs=1): err= 0: pid=1556594: Sat Jul 20 19:05:29 2024 00:35:20.438 read: IOPS=493, BW=1973KiB/s (2020kB/s)(19.6MiB/10179msec) 00:35:20.438 slat (usec): min=7, max=126, avg=33.43, stdev=20.63 00:35:20.438 clat (msec): min=9, max=189, avg=32.18, stdev=10.31 00:35:20.438 lat (msec): min=9, max=189, avg=32.21, stdev=10.31 00:35:20.438 clat percentiles (msec): 00:35:20.438 | 1.00th=[ 18], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 31], 00:35:20.438 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:35:20.438 | 70.00th=[ 33], 80.00th=[ 33], 90.00th=[ 34], 95.00th=[ 39], 00:35:20.438 | 99.00th=[ 53], 99.50th=[ 85], 99.90th=[ 190], 99.95th=[ 190], 00:35:20.438 | 99.99th=[ 190] 00:35:20.438 bw ( KiB/s): min= 1792, max= 2167, per=4.47%, avg=2000.95, stdev=101.09, samples=20 00:35:20.438 iops : min= 448, max= 541, avg=500.20, stdev=25.21, samples=20 00:35:20.438 lat (msec) : 10=0.22%, 20=1.87%, 50=96.41%, 100=1.18%, 250=0.32% 00:35:20.438 cpu : usr=94.23%, sys=2.64%, ctx=137, majf=0, minf=27 00:35:20.438 IO depths : 1=4.9%, 2=10.2%, 4=23.0%, 8=54.0%, 16=7.9%, 32=0.0%, >=64=0.0% 00:35:20.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.438 complete : 0=0.0%, 4=94.0%, 8=0.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.438 issued rwts: total=5020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.438 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:20.438 filename1: (groupid=0, jobs=1): err= 0: pid=1556595: Sat Jul 20 19:05:29 2024 00:35:20.438 read: IOPS=486, BW=1944KiB/s (1991kB/s)(19.3MiB/10163msec) 00:35:20.438 slat (usec): min=8, max=208, avg=29.79, stdev=18.14 00:35:20.438 clat (msec): min=11, max=189, avg=32.69, stdev=10.36 00:35:20.438 lat (msec): min=11, max=189, avg=32.71, stdev=10.36 00:35:20.438 clat percentiles (msec): 00:35:20.438 | 1.00th=[ 20], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 31], 00:35:20.438 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:35:20.438 | 70.00th=[ 33], 80.00th=[ 33], 90.00th=[ 36], 95.00th=[ 42], 00:35:20.438 | 99.00th=[ 61], 99.50th=[ 86], 99.90th=[ 190], 99.95th=[ 190], 00:35:20.438 | 99.99th=[ 190] 00:35:20.438 bw ( KiB/s): min= 1688, max= 2048, per=4.40%, avg=1969.15, stdev=93.96, samples=20 00:35:20.438 iops : min= 422, max= 512, avg=492.25, stdev=23.47, samples=20 00:35:20.438 lat (msec) : 20=1.05%, 50=96.94%, 100=1.68%, 250=0.32% 00:35:20.438 cpu : usr=94.51%, sys=2.78%, ctx=106, majf=0, minf=43 00:35:20.438 IO depths : 1=1.1%, 2=2.9%, 4=15.6%, 8=67.7%, 16=12.7%, 32=0.0%, >=64=0.0% 00:35:20.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.438 complete : 0=0.0%, 4=92.8%, 8=2.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.438 issued rwts: total=4940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.438 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:20.438 filename1: (groupid=0, jobs=1): err= 0: pid=1556596: Sat Jul 20 19:05:29 2024 
00:35:20.438 read: IOPS=493, BW=1972KiB/s (2019kB/s)(19.6MiB/10182msec) 00:35:20.438 slat (usec): min=6, max=123, avg=32.64, stdev=20.12 00:35:20.438 clat (msec): min=9, max=201, avg=32.05, stdev= 9.97 00:35:20.439 lat (msec): min=9, max=201, avg=32.08, stdev= 9.97 00:35:20.439 clat percentiles (msec): 00:35:20.439 | 1.00th=[ 19], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 31], 00:35:20.439 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:35:20.439 | 70.00th=[ 33], 80.00th=[ 33], 90.00th=[ 34], 95.00th=[ 39], 00:35:20.439 | 99.00th=[ 51], 99.50th=[ 56], 99.90th=[ 203], 99.95th=[ 203], 00:35:20.439 | 99.99th=[ 203] 00:35:20.439 bw ( KiB/s): min= 1840, max= 2123, per=4.47%, avg=2001.50, stdev=73.88, samples=20 00:35:20.439 iops : min= 460, max= 530, avg=500.30, stdev=18.45, samples=20 00:35:20.439 lat (msec) : 10=0.12%, 20=1.77%, 50=97.01%, 100=0.78%, 250=0.32% 00:35:20.439 cpu : usr=97.69%, sys=1.78%, ctx=47, majf=0, minf=23 00:35:20.439 IO depths : 1=3.4%, 2=7.8%, 4=21.7%, 8=57.5%, 16=9.6%, 32=0.0%, >=64=0.0% 00:35:20.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.439 complete : 0=0.0%, 4=93.9%, 8=0.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.439 issued rwts: total=5020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:20.439 filename1: (groupid=0, jobs=1): err= 0: pid=1556597: Sat Jul 20 19:05:29 2024 00:35:20.439 read: IOPS=486, BW=1946KiB/s (1992kB/s)(19.3MiB/10162msec) 00:35:20.439 slat (usec): min=8, max=127, avg=30.26, stdev=18.79 00:35:20.439 clat (msec): min=11, max=237, avg=32.68, stdev=11.31 00:35:20.439 lat (msec): min=11, max=237, avg=32.71, stdev=11.31 00:35:20.439 clat percentiles (msec): 00:35:20.439 | 1.00th=[ 17], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 31], 00:35:20.439 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:35:20.439 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 38], 95.00th=[ 43], 00:35:20.439 | 99.00th=[ 61], 99.50th=[ 86], 99.90th=[ 190], 99.95th=[ 239], 00:35:20.439 | 99.99th=[ 239] 00:35:20.439 bw ( KiB/s): min= 1632, max= 2144, per=4.41%, avg=1970.35, stdev=111.59, samples=20 00:35:20.439 iops : min= 408, max= 536, avg=492.55, stdev=27.89, samples=20 00:35:20.439 lat (msec) : 20=1.94%, 50=96.22%, 100=1.52%, 250=0.32% 00:35:20.439 cpu : usr=97.79%, sys=1.73%, ctx=31, majf=0, minf=28 00:35:20.439 IO depths : 1=1.1%, 2=2.3%, 4=13.9%, 8=69.8%, 16=12.9%, 32=0.0%, >=64=0.0% 00:35:20.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.439 complete : 0=0.0%, 4=92.5%, 8=3.4%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.439 issued rwts: total=4943,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:20.439 filename2: (groupid=0, jobs=1): err= 0: pid=1556598: Sat Jul 20 19:05:29 2024 00:35:20.439 read: IOPS=477, BW=1908KiB/s (1954kB/s)(18.9MiB/10162msec) 00:35:20.439 slat (usec): min=8, max=137, avg=38.69, stdev=23.10 00:35:20.439 clat (msec): min=10, max=190, avg=33.22, stdev=10.73 00:35:20.439 lat (msec): min=10, max=190, avg=33.26, stdev=10.73 00:35:20.439 clat percentiles (msec): 00:35:20.439 | 1.00th=[ 20], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 31], 00:35:20.439 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:35:20.439 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 39], 95.00th=[ 44], 00:35:20.439 | 99.00th=[ 57], 99.50th=[ 85], 99.90th=[ 190], 99.95th=[ 190], 00:35:20.439 | 99.99th=[ 190] 00:35:20.439 bw ( KiB/s): min= 1536, 
max= 2048, per=4.32%, avg=1932.50, stdev=126.43, samples=20 00:35:20.439 iops : min= 384, max= 512, avg=483.05, stdev=31.63, samples=20 00:35:20.439 lat (msec) : 20=1.38%, 50=96.45%, 100=1.84%, 250=0.33% 00:35:20.439 cpu : usr=96.81%, sys=1.81%, ctx=48, majf=0, minf=29 00:35:20.439 IO depths : 1=4.3%, 2=9.6%, 4=23.6%, 8=54.1%, 16=8.4%, 32=0.0%, >=64=0.0% 00:35:20.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.439 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.439 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:20.439 filename2: (groupid=0, jobs=1): err= 0: pid=1556599: Sat Jul 20 19:05:29 2024 00:35:20.439 read: IOPS=464, BW=1856KiB/s (1901kB/s)(18.4MiB/10156msec) 00:35:20.439 slat (usec): min=8, max=110, avg=30.96, stdev=19.60 00:35:20.439 clat (msec): min=13, max=188, avg=34.24, stdev=11.00 00:35:20.439 lat (msec): min=13, max=188, avg=34.27, stdev=10.99 00:35:20.439 clat percentiles (msec): 00:35:20.439 | 1.00th=[ 22], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 31], 00:35:20.439 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 33], 00:35:20.439 | 70.00th=[ 34], 80.00th=[ 37], 90.00th=[ 43], 95.00th=[ 46], 00:35:20.439 | 99.00th=[ 57], 99.50th=[ 86], 99.90th=[ 190], 99.95th=[ 190], 00:35:20.439 | 99.99th=[ 190] 00:35:20.439 bw ( KiB/s): min= 1480, max= 1992, per=4.20%, avg=1878.60, stdev=128.57, samples=20 00:35:20.439 iops : min= 370, max= 498, avg=469.65, stdev=32.14, samples=20 00:35:20.439 lat (msec) : 20=0.79%, 50=96.67%, 100=2.21%, 250=0.34% 00:35:20.439 cpu : usr=97.93%, sys=1.63%, ctx=24, majf=0, minf=34 00:35:20.439 IO depths : 1=0.5%, 2=1.1%, 4=12.0%, 8=72.4%, 16=14.1%, 32=0.0%, >=64=0.0% 00:35:20.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.439 complete : 0=0.0%, 4=91.8%, 8=4.5%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.439 issued rwts: total=4713,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:20.439 filename2: (groupid=0, jobs=1): err= 0: pid=1556600: Sat Jul 20 19:05:29 2024 00:35:20.439 read: IOPS=434, BW=1736KiB/s (1778kB/s)(17.2MiB/10152msec) 00:35:20.439 slat (usec): min=8, max=137, avg=41.91, stdev=25.64 00:35:20.439 clat (msec): min=12, max=254, avg=36.61, stdev=14.01 00:35:20.439 lat (msec): min=12, max=254, avg=36.65, stdev=14.01 00:35:20.439 clat percentiles (msec): 00:35:20.439 | 1.00th=[ 24], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 32], 00:35:20.439 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 35], 00:35:20.439 | 70.00th=[ 39], 80.00th=[ 42], 90.00th=[ 46], 95.00th=[ 53], 00:35:20.439 | 99.00th=[ 63], 99.50th=[ 74], 99.90th=[ 255], 99.95th=[ 255], 00:35:20.439 | 99.99th=[ 255] 00:35:20.439 bw ( KiB/s): min= 1384, max= 1936, per=3.93%, avg=1756.40, stdev=132.18, samples=20 00:35:20.439 iops : min= 346, max= 484, avg=439.10, stdev=33.05, samples=20 00:35:20.439 lat (msec) : 20=0.36%, 50=92.74%, 100=6.54%, 250=0.25%, 500=0.11% 00:35:20.439 cpu : usr=97.71%, sys=1.80%, ctx=21, majf=0, minf=25 00:35:20.439 IO depths : 1=0.1%, 2=0.4%, 4=10.3%, 8=75.1%, 16=14.2%, 32=0.0%, >=64=0.0% 00:35:20.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.439 complete : 0=0.0%, 4=90.5%, 8=5.5%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.439 issued rwts: total=4407,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.439 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:35:20.439 filename2: (groupid=0, jobs=1): err= 0: pid=1556601: Sat Jul 20 19:05:29 2024 00:35:20.439 read: IOPS=432, BW=1729KiB/s (1771kB/s)(17.1MiB/10140msec) 00:35:20.439 slat (usec): min=8, max=118, avg=37.33, stdev=25.13 00:35:20.439 clat (msec): min=9, max=207, avg=36.62, stdev=11.62 00:35:20.439 lat (msec): min=9, max=207, avg=36.65, stdev=11.62 00:35:20.439 clat percentiles (msec): 00:35:20.439 | 1.00th=[ 21], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 31], 00:35:20.439 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 36], 00:35:20.439 | 70.00th=[ 40], 80.00th=[ 43], 90.00th=[ 47], 95.00th=[ 50], 00:35:20.439 | 99.00th=[ 60], 99.50th=[ 71], 99.90th=[ 207], 99.95th=[ 207], 00:35:20.439 | 99.99th=[ 207] 00:35:20.439 bw ( KiB/s): min= 1424, max= 1920, per=3.91%, avg=1747.20, stdev=118.61, samples=20 00:35:20.439 iops : min= 356, max= 480, avg=436.80, stdev=29.65, samples=20 00:35:20.439 lat (msec) : 10=0.09%, 20=0.66%, 50=94.41%, 100=4.47%, 250=0.36% 00:35:20.439 cpu : usr=97.97%, sys=1.60%, ctx=17, majf=0, minf=41 00:35:20.439 IO depths : 1=0.1%, 2=0.9%, 4=10.0%, 8=74.1%, 16=14.9%, 32=0.0%, >=64=0.0% 00:35:20.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.439 complete : 0=0.0%, 4=90.8%, 8=5.7%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.439 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:20.439 filename2: (groupid=0, jobs=1): err= 0: pid=1556602: Sat Jul 20 19:05:29 2024 00:35:20.439 read: IOPS=442, BW=1772KiB/s (1814kB/s)(17.5MiB/10141msec) 00:35:20.439 slat (usec): min=8, max=1557, avg=25.94, stdev=32.22 00:35:20.439 clat (msec): min=10, max=248, avg=35.95, stdev=12.56 00:35:20.439 lat (msec): min=10, max=248, avg=35.98, stdev=12.56 00:35:20.439 clat percentiles (msec): 00:35:20.439 | 1.00th=[ 22], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 31], 00:35:20.439 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 34], 00:35:20.439 | 70.00th=[ 37], 80.00th=[ 42], 90.00th=[ 46], 95.00th=[ 50], 00:35:20.439 | 99.00th=[ 62], 99.50th=[ 88], 99.90th=[ 188], 99.95th=[ 247], 00:35:20.439 | 99.99th=[ 249] 00:35:20.439 bw ( KiB/s): min= 1480, max= 2016, per=4.00%, avg=1790.40, stdev=131.98, samples=20 00:35:20.439 iops : min= 370, max= 504, avg=447.60, stdev=33.00, samples=20 00:35:20.439 lat (msec) : 20=0.62%, 50=94.70%, 100=4.32%, 250=0.36% 00:35:20.439 cpu : usr=92.47%, sys=3.66%, ctx=215, majf=0, minf=32 00:35:20.439 IO depths : 1=0.4%, 2=1.0%, 4=12.2%, 8=72.0%, 16=14.4%, 32=0.0%, >=64=0.0% 00:35:20.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.439 complete : 0=0.0%, 4=91.5%, 8=4.8%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.439 issued rwts: total=4492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:20.439 filename2: (groupid=0, jobs=1): err= 0: pid=1556603: Sat Jul 20 19:05:29 2024 00:35:20.439 read: IOPS=485, BW=1944KiB/s (1991kB/s)(19.2MiB/10134msec) 00:35:20.439 slat (usec): min=8, max=785, avg=39.27, stdev=30.78 00:35:20.439 clat (msec): min=11, max=243, avg=32.65, stdev=13.31 00:35:20.439 lat (msec): min=11, max=243, avg=32.69, stdev=13.31 00:35:20.439 clat percentiles (msec): 00:35:20.439 | 1.00th=[ 16], 5.00th=[ 23], 10.00th=[ 28], 20.00th=[ 31], 00:35:20.439 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:35:20.439 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 39], 95.00th=[ 44], 00:35:20.439 | 99.00th=[ 55], 99.50th=[ 
63], 99.90th=[ 245], 99.95th=[ 245], 00:35:20.439 | 99.99th=[ 245] 00:35:20.439 bw ( KiB/s): min= 1616, max= 2096, per=4.39%, avg=1963.35, stdev=110.41, samples=20 00:35:20.439 iops : min= 404, max= 524, avg=490.80, stdev=27.66, samples=20 00:35:20.439 lat (msec) : 20=3.15%, 50=95.19%, 100=1.34%, 250=0.32% 00:35:20.439 cpu : usr=89.43%, sys=4.22%, ctx=265, majf=0, minf=24 00:35:20.439 IO depths : 1=1.9%, 2=4.2%, 4=17.7%, 8=64.8%, 16=11.3%, 32=0.0%, >=64=0.0% 00:35:20.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.439 complete : 0=0.0%, 4=93.1%, 8=1.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.439 issued rwts: total=4925,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:20.439 filename2: (groupid=0, jobs=1): err= 0: pid=1556604: Sat Jul 20 19:05:29 2024 00:35:20.439 read: IOPS=496, BW=1985KiB/s (2033kB/s)(19.7MiB/10183msec) 00:35:20.439 slat (usec): min=7, max=108, avg=27.46, stdev=18.35 00:35:20.439 clat (msec): min=11, max=191, avg=32.01, stdev=10.22 00:35:20.439 lat (msec): min=11, max=191, avg=32.04, stdev=10.22 00:35:20.440 clat percentiles (msec): 00:35:20.440 | 1.00th=[ 18], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 31], 00:35:20.440 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:35:20.440 | 70.00th=[ 33], 80.00th=[ 33], 90.00th=[ 34], 95.00th=[ 39], 00:35:20.440 | 99.00th=[ 50], 99.50th=[ 85], 99.90th=[ 190], 99.95th=[ 190], 00:35:20.440 | 99.99th=[ 192] 00:35:20.440 bw ( KiB/s): min= 1792, max= 2064, per=4.51%, avg=2015.10, stdev=71.70, samples=20 00:35:20.440 iops : min= 448, max= 516, avg=503.70, stdev=17.96, samples=20 00:35:20.440 lat (msec) : 20=2.06%, 50=96.95%, 100=0.67%, 250=0.32% 00:35:20.440 cpu : usr=97.86%, sys=1.59%, ctx=71, majf=0, minf=33 00:35:20.440 IO depths : 1=4.9%, 2=10.3%, 4=23.4%, 8=53.5%, 16=7.9%, 32=0.0%, >=64=0.0% 00:35:20.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.440 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.440 issued rwts: total=5054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:20.440 filename2: (groupid=0, jobs=1): err= 0: pid=1556605: Sat Jul 20 19:05:29 2024 00:35:20.440 read: IOPS=481, BW=1926KiB/s (1973kB/s)(19.1MiB/10162msec) 00:35:20.440 slat (usec): min=8, max=117, avg=32.00, stdev=20.84 00:35:20.440 clat (msec): min=9, max=244, avg=32.95, stdev=12.71 00:35:20.440 lat (msec): min=9, max=244, avg=32.98, stdev=12.71 00:35:20.440 clat percentiles (msec): 00:35:20.440 | 1.00th=[ 16], 5.00th=[ 25], 10.00th=[ 29], 20.00th=[ 31], 00:35:20.440 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:35:20.440 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 40], 95.00th=[ 45], 00:35:20.440 | 99.00th=[ 54], 99.50th=[ 63], 99.90th=[ 245], 99.95th=[ 245], 00:35:20.440 | 99.99th=[ 245] 00:35:20.440 bw ( KiB/s): min= 1667, max= 2087, per=4.36%, avg=1950.90, stdev=106.41, samples=20 00:35:20.440 iops : min= 416, max= 521, avg=487.65, stdev=26.66, samples=20 00:35:20.440 lat (msec) : 10=0.08%, 20=2.88%, 50=95.32%, 100=1.39%, 250=0.33% 00:35:20.440 cpu : usr=98.09%, sys=1.48%, ctx=31, majf=0, minf=29 00:35:20.440 IO depths : 1=1.9%, 2=4.0%, 4=15.3%, 8=66.6%, 16=12.2%, 32=0.0%, >=64=0.0% 00:35:20.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.440 complete : 0=0.0%, 4=92.7%, 8=3.0%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.440 issued rwts: 
total=4894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:20.440 00:35:20.440 Run status group 0 (all jobs): 00:35:20.440 READ: bw=43.7MiB/s (45.8MB/s), 1669KiB/s-1985KiB/s (1709kB/s-2033kB/s), io=445MiB (466MB), run=10114-10183msec 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete 
bdev_null2 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.440 bdev_null0 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.440 [2024-07-20 19:05:29.579169] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 
-- # local sub_id=1 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.440 bdev_null1 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:20.440 19:05:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:20.440 { 00:35:20.440 "params": { 00:35:20.440 "name": "Nvme$subsystem", 00:35:20.440 "trtype": "$TEST_TRANSPORT", 00:35:20.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:20.440 "adrfam": "ipv4", 00:35:20.440 "trsvcid": "$NVMF_PORT", 00:35:20.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:20.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:20.440 "hdgst": ${hdgst:-false}, 00:35:20.440 "ddgst": ${ddgst:-false} 00:35:20.440 }, 00:35:20.440 "method": "bdev_nvme_attach_controller" 00:35:20.440 } 00:35:20.440 EOF 00:35:20.440 )") 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:20.441 { 00:35:20.441 "params": { 00:35:20.441 "name": "Nvme$subsystem", 00:35:20.441 "trtype": "$TEST_TRANSPORT", 00:35:20.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:20.441 "adrfam": "ipv4", 00:35:20.441 "trsvcid": "$NVMF_PORT", 00:35:20.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:20.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:20.441 "hdgst": ${hdgst:-false}, 00:35:20.441 "ddgst": ${ddgst:-false} 00:35:20.441 }, 00:35:20.441 "method": "bdev_nvme_attach_controller" 00:35:20.441 } 00:35:20.441 EOF 00:35:20.441 )") 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:20.441 "params": { 00:35:20.441 "name": "Nvme0", 00:35:20.441 "trtype": "tcp", 00:35:20.441 "traddr": "10.0.0.2", 00:35:20.441 "adrfam": "ipv4", 00:35:20.441 "trsvcid": "4420", 00:35:20.441 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:20.441 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:20.441 "hdgst": false, 00:35:20.441 "ddgst": false 00:35:20.441 }, 00:35:20.441 "method": "bdev_nvme_attach_controller" 00:35:20.441 },{ 00:35:20.441 "params": { 00:35:20.441 "name": "Nvme1", 00:35:20.441 "trtype": "tcp", 00:35:20.441 "traddr": "10.0.0.2", 00:35:20.441 "adrfam": "ipv4", 00:35:20.441 "trsvcid": "4420", 00:35:20.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:20.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:20.441 "hdgst": false, 00:35:20.441 "ddgst": false 00:35:20.441 }, 00:35:20.441 "method": "bdev_nvme_attach_controller" 00:35:20.441 }' 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:20.441 19:05:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:20.441 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:20.441 ... 00:35:20.441 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:20.441 ... 
00:35:20.441 fio-3.35 00:35:20.441 Starting 4 threads 00:35:20.441 EAL: No free 2048 kB hugepages reported on node 1 00:35:25.705 00:35:25.705 filename0: (groupid=0, jobs=1): err= 0: pid=1557982: Sat Jul 20 19:05:35 2024 00:35:25.705 read: IOPS=1699, BW=13.3MiB/s (13.9MB/s)(66.4MiB/5002msec) 00:35:25.705 slat (nsec): min=6854, max=50011, avg=12860.54, stdev=5881.13 00:35:25.705 clat (usec): min=1718, max=8227, avg=4669.03, stdev=861.31 00:35:25.705 lat (usec): min=1727, max=8249, avg=4681.89, stdev=861.18 00:35:25.705 clat percentiles (usec): 00:35:25.705 | 1.00th=[ 3032], 5.00th=[ 3490], 10.00th=[ 3720], 20.00th=[ 4015], 00:35:25.705 | 30.00th=[ 4178], 40.00th=[ 4359], 50.00th=[ 4490], 60.00th=[ 4686], 00:35:25.705 | 70.00th=[ 4948], 80.00th=[ 5407], 90.00th=[ 5932], 95.00th=[ 6325], 00:35:25.705 | 99.00th=[ 6980], 99.50th=[ 7373], 99.90th=[ 7767], 99.95th=[ 7767], 00:35:25.705 | 99.99th=[ 8225] 00:35:25.705 bw ( KiB/s): min=12912, max=14028, per=23.20%, avg=13591.60, stdev=330.36, samples=10 00:35:25.705 iops : min= 1614, max= 1753, avg=1698.90, stdev=41.22, samples=10 00:35:25.705 lat (msec) : 2=0.07%, 4=19.47%, 10=80.46% 00:35:25.705 cpu : usr=94.64%, sys=4.50%, ctx=8, majf=0, minf=9 00:35:25.705 IO depths : 1=0.3%, 2=2.5%, 4=67.5%, 8=29.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:25.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.705 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.705 issued rwts: total=8499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:25.705 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:25.705 filename0: (groupid=0, jobs=1): err= 0: pid=1557983: Sat Jul 20 19:05:35 2024 00:35:25.705 read: IOPS=1907, BW=14.9MiB/s (15.6MB/s)(74.5MiB/5002msec) 00:35:25.705 slat (nsec): min=6812, max=47388, avg=11236.83, stdev=4489.12 00:35:25.705 clat (usec): min=1908, max=47234, avg=4159.31, stdev=1383.99 00:35:25.705 lat (usec): min=1917, max=47255, avg=4170.55, stdev=1383.93 00:35:25.705 clat percentiles (usec): 00:35:25.705 | 1.00th=[ 2835], 5.00th=[ 3195], 10.00th=[ 3392], 20.00th=[ 3621], 00:35:25.706 | 30.00th=[ 3818], 40.00th=[ 3949], 50.00th=[ 4113], 60.00th=[ 4228], 00:35:25.706 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 4883], 95.00th=[ 5211], 00:35:25.706 | 99.00th=[ 5866], 99.50th=[ 6063], 99.90th=[ 6915], 99.95th=[47449], 00:35:25.706 | 99.99th=[47449] 00:35:25.706 bw ( KiB/s): min=14272, max=16032, per=25.92%, avg=15187.56, stdev=643.98, samples=9 00:35:25.706 iops : min= 1784, max= 2004, avg=1898.44, stdev=80.50, samples=9 00:35:25.706 lat (msec) : 2=0.05%, 4=42.57%, 10=57.29%, 50=0.08% 00:35:25.706 cpu : usr=94.70%, sys=4.82%, ctx=9, majf=0, minf=0 00:35:25.706 IO depths : 1=0.3%, 2=2.8%, 4=67.7%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:25.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.706 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.706 issued rwts: total=9542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:25.706 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:25.706 filename1: (groupid=0, jobs=1): err= 0: pid=1557984: Sat Jul 20 19:05:35 2024 00:35:25.706 read: IOPS=1921, BW=15.0MiB/s (15.7MB/s)(75.1MiB/5002msec) 00:35:25.706 slat (nsec): min=7392, max=42390, avg=11508.35, stdev=4527.48 00:35:25.706 clat (usec): min=1669, max=45530, avg=4129.93, stdev=1345.34 00:35:25.706 lat (usec): min=1678, max=45572, avg=4141.44, stdev=1345.39 00:35:25.706 clat percentiles (usec): 00:35:25.706 | 1.00th=[ 2737], 
5.00th=[ 3163], 10.00th=[ 3359], 20.00th=[ 3589], 00:35:25.706 | 30.00th=[ 3785], 40.00th=[ 3916], 50.00th=[ 4080], 60.00th=[ 4228], 00:35:25.706 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 4883], 95.00th=[ 5211], 00:35:25.706 | 99.00th=[ 5866], 99.50th=[ 6063], 99.90th=[ 7308], 99.95th=[45351], 00:35:25.706 | 99.99th=[45351] 00:35:25.706 bw ( KiB/s): min=14304, max=15872, per=26.08%, avg=15280.00, stdev=575.94, samples=9 00:35:25.706 iops : min= 1788, max= 1984, avg=1910.00, stdev=71.99, samples=9 00:35:25.706 lat (msec) : 2=0.04%, 4=43.95%, 10=55.93%, 50=0.08% 00:35:25.706 cpu : usr=94.38%, sys=5.14%, ctx=8, majf=0, minf=0 00:35:25.706 IO depths : 1=0.2%, 2=1.8%, 4=67.9%, 8=30.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:25.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.706 complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.706 issued rwts: total=9612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:25.706 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:25.706 filename1: (groupid=0, jobs=1): err= 0: pid=1557985: Sat Jul 20 19:05:35 2024 00:35:25.706 read: IOPS=1795, BW=14.0MiB/s (14.7MB/s)(70.2MiB/5001msec) 00:35:25.706 slat (nsec): min=6957, max=42717, avg=10591.35, stdev=4106.42 00:35:25.706 clat (usec): min=2410, max=7934, avg=4422.56, stdev=775.08 00:35:25.706 lat (usec): min=2417, max=7941, avg=4433.15, stdev=774.96 00:35:25.706 clat percentiles (usec): 00:35:25.706 | 1.00th=[ 2868], 5.00th=[ 3326], 10.00th=[ 3556], 20.00th=[ 3851], 00:35:25.706 | 30.00th=[ 4015], 40.00th=[ 4178], 50.00th=[ 4293], 60.00th=[ 4490], 00:35:25.706 | 70.00th=[ 4621], 80.00th=[ 4948], 90.00th=[ 5473], 95.00th=[ 5932], 00:35:25.706 | 99.00th=[ 6718], 99.50th=[ 6915], 99.90th=[ 7504], 99.95th=[ 7635], 00:35:25.706 | 99.99th=[ 7963] 00:35:25.706 bw ( KiB/s): min=13536, max=14816, per=24.45%, avg=14323.56, stdev=419.66, samples=9 00:35:25.706 iops : min= 1692, max= 1852, avg=1790.44, stdev=52.46, samples=9 00:35:25.706 lat (msec) : 4=28.41%, 10=71.59% 00:35:25.706 cpu : usr=95.20%, sys=4.16%, ctx=9, majf=0, minf=11 00:35:25.706 IO depths : 1=0.4%, 2=2.3%, 4=68.7%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:25.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.706 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.706 issued rwts: total=8980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:25.706 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:25.706 00:35:25.706 Run status group 0 (all jobs): 00:35:25.706 READ: bw=57.2MiB/s (60.0MB/s), 13.3MiB/s-15.0MiB/s (13.9MB/s-15.7MB/s), io=286MiB (300MB), run=5001-5002msec 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.706 00:35:25.706 real 0m24.308s 00:35:25.706 user 4m31.764s 00:35:25.706 sys 0m8.600s 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:25.706 19:05:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.706 ************************************ 00:35:25.706 END TEST fio_dif_rand_params 00:35:25.706 ************************************ 00:35:25.706 19:05:35 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:25.706 19:05:35 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:25.706 19:05:35 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:25.706 19:05:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:25.706 ************************************ 00:35:25.706 START TEST fio_dif_digest 00:35:25.706 ************************************ 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:25.706 bdev_null0 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:25.706 [2024-07-20 19:05:35.867623] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:25.706 { 00:35:25.706 "params": { 00:35:25.706 "name": "Nvme$subsystem", 00:35:25.706 "trtype": "$TEST_TRANSPORT", 00:35:25.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:25.706 "adrfam": "ipv4", 00:35:25.706 "trsvcid": "$NVMF_PORT", 00:35:25.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:25.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:25.706 "hdgst": ${hdgst:-false}, 00:35:25.706 "ddgst": ${ddgst:-false} 00:35:25.706 }, 00:35:25.706 "method": "bdev_nvme_attach_controller" 00:35:25.706 } 00:35:25.706 EOF 00:35:25.706 )") 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@82 
-- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:25.706 19:05:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:25.707 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:25.707 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:25.707 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.707 19:05:35 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:25.707 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:35:25.707 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:25.707 19:05:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
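The trace above assembles the per-subsystem JSON for fio and resolves any sanitizer libraries to preload before launching fio with the spdk_bdev ioengine; header and data digests are requested through the JSON ("hdgst"/"ddgst"), not through fio itself. A minimal standalone sketch of an equivalent launch is below. The config path, job-file path, plugin path, and the bdev name Nvme0n1 are assumptions for illustration; the harness generates its config on the fly and feeds it over /dev/fd.

# Sketch only -- not the harness's generated setup. Assumes the JSON printed
# next was saved to /tmp/bdev_nvme.json and that the attached controller
# exposes a bdev named Nvme0n1 (controller name "Nvme0", namespace 1).
cat > /tmp/digest.fio <<'EOF'
[global]
thread=1
time_based=1
runtime=10
[filename0]
rw=randread
bs=128k
numjobs=3
iodepth=3
filename=Nvme0n1
EOF
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
  fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev_nvme.json /tmp/digest.fio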
00:35:25.707 19:05:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:25.707 19:05:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:25.707 "params": { 00:35:25.707 "name": "Nvme0", 00:35:25.707 "trtype": "tcp", 00:35:25.707 "traddr": "10.0.0.2", 00:35:25.707 "adrfam": "ipv4", 00:35:25.707 "trsvcid": "4420", 00:35:25.707 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:25.707 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:25.707 "hdgst": true, 00:35:25.707 "ddgst": true 00:35:25.707 }, 00:35:25.707 "method": "bdev_nvme_attach_controller" 00:35:25.707 }' 00:35:25.707 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:25.707 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:25.707 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:25.707 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.707 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:25.707 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:25.707 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:25.707 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:25.707 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:25.707 19:05:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.997 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:25.997 ... 
00:35:25.997 fio-3.35 00:35:25.997 Starting 3 threads 00:35:25.997 EAL: No free 2048 kB hugepages reported on node 1 00:35:38.187 00:35:38.187 filename0: (groupid=0, jobs=1): err= 0: pid=1558747: Sat Jul 20 19:05:46 2024 00:35:38.187 read: IOPS=158, BW=19.8MiB/s (20.8MB/s)(199MiB/10050msec) 00:35:38.187 slat (nsec): min=4581, max=47752, avg=18500.91, stdev=5288.43 00:35:38.187 clat (usec): min=8251, max=98265, avg=18873.98, stdev=14678.77 00:35:38.187 lat (usec): min=8264, max=98285, avg=18892.48, stdev=14678.61 00:35:38.187 clat percentiles (usec): 00:35:38.187 | 1.00th=[ 8586], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[11600], 00:35:38.187 | 30.00th=[12387], 40.00th=[13435], 50.00th=[14222], 60.00th=[14746], 00:35:38.187 | 70.00th=[15270], 80.00th=[16057], 90.00th=[52691], 95.00th=[54789], 00:35:38.187 | 99.00th=[57934], 99.50th=[93848], 99.90th=[98042], 99.95th=[98042], 00:35:38.187 | 99.99th=[98042] 00:35:38.187 bw ( KiB/s): min=14592, max=24832, per=32.64%, avg=20362.80, stdev=3001.27, samples=20 00:35:38.187 iops : min= 114, max= 194, avg=159.05, stdev=23.45, samples=20 00:35:38.187 lat (msec) : 10=6.28%, 20=80.73%, 50=0.19%, 100=12.81% 00:35:38.187 cpu : usr=92.28%, sys=5.71%, ctx=550, majf=0, minf=55 00:35:38.187 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:38.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.187 issued rwts: total=1593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:38.187 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:38.187 filename0: (groupid=0, jobs=1): err= 0: pid=1558748: Sat Jul 20 19:05:46 2024 00:35:38.187 read: IOPS=164, BW=20.6MiB/s (21.6MB/s)(207MiB/10046msec) 00:35:38.187 slat (nsec): min=5217, max=53949, avg=16564.29, stdev=5163.62 00:35:38.187 clat (usec): min=8457, max=92557, avg=18185.30, stdev=13731.44 00:35:38.187 lat (usec): min=8470, max=92577, avg=18201.86, stdev=13731.51 00:35:38.187 clat percentiles (usec): 00:35:38.187 | 1.00th=[ 8979], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[11207], 00:35:38.187 | 30.00th=[11994], 40.00th=[13042], 50.00th=[13829], 60.00th=[14484], 00:35:38.187 | 70.00th=[15008], 80.00th=[15664], 90.00th=[52167], 95.00th=[54264], 00:35:38.187 | 99.00th=[56886], 99.50th=[57410], 99.90th=[91751], 99.95th=[92799], 00:35:38.187 | 99.99th=[92799] 00:35:38.187 bw ( KiB/s): min=15872, max=26368, per=33.85%, avg=21120.00, stdev=2572.77, samples=20 00:35:38.187 iops : min= 124, max= 206, avg=165.00, stdev=20.10, samples=20 00:35:38.187 lat (msec) : 10=6.11%, 20=81.49%, 50=0.12%, 100=12.28% 00:35:38.187 cpu : usr=94.32%, sys=5.16%, ctx=23, majf=0, minf=191 00:35:38.187 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:38.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.187 issued rwts: total=1653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:38.187 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:38.187 filename0: (groupid=0, jobs=1): err= 0: pid=1558749: Sat Jul 20 19:05:46 2024 00:35:38.187 read: IOPS=164, BW=20.6MiB/s (21.6MB/s)(207MiB/10046msec) 00:35:38.187 slat (nsec): min=4971, max=66929, avg=15907.11, stdev=4434.57 00:35:38.187 clat (usec): min=7907, max=96831, avg=18196.48, stdev=14323.08 00:35:38.187 lat (usec): min=7927, max=96850, avg=18212.38, stdev=14323.07 00:35:38.187 clat percentiles 
(usec): 00:35:38.187 | 1.00th=[ 8356], 5.00th=[ 9503], 10.00th=[10552], 20.00th=[11338], 00:35:38.187 | 30.00th=[12125], 40.00th=[13173], 50.00th=[13960], 60.00th=[14484], 00:35:38.187 | 70.00th=[15139], 80.00th=[15926], 90.00th=[52167], 95.00th=[54789], 00:35:38.187 | 99.00th=[57410], 99.50th=[93848], 99.90th=[96994], 99.95th=[96994], 00:35:38.187 | 99.99th=[96994] 00:35:38.187 bw ( KiB/s): min=12312, max=28160, per=33.86%, avg=21121.20, stdev=3606.33, samples=20 00:35:38.187 iops : min= 96, max= 220, avg=165.00, stdev=28.20, samples=20 00:35:38.187 lat (msec) : 10=7.20%, 20=81.36%, 50=0.06%, 100=11.38% 00:35:38.187 cpu : usr=94.89%, sys=4.60%, ctx=20, majf=0, minf=151 00:35:38.187 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:38.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.187 issued rwts: total=1652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:38.187 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:38.187 00:35:38.187 Run status group 0 (all jobs): 00:35:38.187 READ: bw=60.9MiB/s (63.9MB/s), 19.8MiB/s-20.6MiB/s (20.8MB/s-21.6MB/s), io=612MiB (642MB), run=10046-10050msec 00:35:38.187 19:05:46 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:38.187 19:05:46 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:38.187 19:05:46 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:38.187 19:05:46 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:38.187 19:05:46 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:38.187 19:05:46 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:38.187 19:05:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.187 19:05:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:38.187 19:05:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.187 19:05:46 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:38.187 19:05:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.187 19:05:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:38.187 19:05:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.187 00:35:38.187 real 0m11.099s 00:35:38.187 user 0m29.375s 00:35:38.187 sys 0m1.861s 00:35:38.187 19:05:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:38.187 19:05:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:38.187 ************************************ 00:35:38.187 END TEST fio_dif_digest 00:35:38.187 ************************************ 00:35:38.187 19:05:46 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:38.187 19:05:46 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:38.187 19:05:46 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:38.187 19:05:46 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:35:38.187 19:05:46 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:38.187 19:05:46 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:35:38.187 19:05:46 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:38.187 19:05:46 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:38.187 rmmod nvme_tcp 
00:35:38.187 rmmod nvme_fabrics 00:35:38.187 rmmod nvme_keyring 00:35:38.187 19:05:47 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:38.187 19:05:47 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:35:38.187 19:05:47 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:35:38.187 19:05:47 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1552698 ']' 00:35:38.187 19:05:47 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1552698 00:35:38.187 19:05:47 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 1552698 ']' 00:35:38.187 19:05:47 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 1552698 00:35:38.187 19:05:47 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:35:38.187 19:05:47 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:38.187 19:05:47 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1552698 00:35:38.187 19:05:47 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:38.187 19:05:47 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:38.187 19:05:47 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1552698' 00:35:38.187 killing process with pid 1552698 00:35:38.187 19:05:47 nvmf_dif -- common/autotest_common.sh@965 -- # kill 1552698 00:35:38.187 19:05:47 nvmf_dif -- common/autotest_common.sh@970 -- # wait 1552698 00:35:38.187 19:05:47 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:38.187 19:05:47 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:38.187 Waiting for block devices as requested 00:35:38.187 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:38.187 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:38.447 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:38.447 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:38.447 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:38.447 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:38.705 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:38.705 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:38.705 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:38.705 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:38.962 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:38.962 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:38.962 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:38.962 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:39.220 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:39.220 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:39.220 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:39.479 19:05:49 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:39.479 19:05:49 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:39.479 19:05:49 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:39.479 19:05:49 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:39.479 19:05:49 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:39.479 19:05:49 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:39.479 19:05:49 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:41.379 19:05:51 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:41.379 00:35:41.379 real 1m6.458s 00:35:41.379 user 6m28.205s 00:35:41.379 sys 0m19.863s 00:35:41.379 19:05:51 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:41.379 19:05:51 nvmf_dif -- 
common/autotest_common.sh@10 -- # set +x 00:35:41.379 ************************************ 00:35:41.379 END TEST nvmf_dif 00:35:41.379 ************************************ 00:35:41.379 19:05:51 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:41.379 19:05:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:41.379 19:05:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:41.379 19:05:51 -- common/autotest_common.sh@10 -- # set +x 00:35:41.379 ************************************ 00:35:41.379 START TEST nvmf_abort_qd_sizes 00:35:41.379 ************************************ 00:35:41.379 19:05:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:41.379 * Looking for test storage... 00:35:41.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:41.379 19:05:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:41.379 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:41.636 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:41.637 19:05:51 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:35:41.637 19:05:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:43.532 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:43.532 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:43.532 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:43.532 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:43.533 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
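Device discovery above located the two Intel E810 ports (0000:0a:00.0 and 0000:0a:00.1, device ID 0x159b) and their net devices cvl_0_0 and cvl_0_1. A small manual cross-check, assuming lspci is available and the PCI addresses match the trace:

# Not part of the run above; a quick way to confirm the same two ports by hand.
lspci -D -d 8086:159b                       # list Intel E810 (device 0x159b) functions
ls /sys/bus/pci/devices/0000:0a:00.0/net    # expected to show cvl_0_0
ls /sys/bus/pci/devices/0000:0a:00.1/net    # expected to show cvl_0_1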
00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:43.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:43.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:35:43.533 00:35:43.533 --- 10.0.0.2 ping statistics --- 00:35:43.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:43.533 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:43.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:43.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:35:43.533 00:35:43.533 --- 10.0.0.1 ping statistics --- 00:35:43.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:43.533 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:43.533 19:05:53 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:44.903 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:44.903 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:44.903 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:44.903 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:44.903 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:44.903 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:44.903 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:44.903 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:44.903 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:44.903 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:44.903 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:44.903 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:44.903 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:44.903 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:44.903 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:44.903 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:45.834 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:45.834 19:05:55 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:45.834 19:05:55 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:45.834 19:05:55 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:45.834 19:05:55 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:45.834 19:05:55 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:45.834 19:05:55 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:45.834 19:05:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:45.834 19:05:55 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:45.834 19:05:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:45.834 19:05:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:45.834 19:05:55 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1563531 00:35:45.834 19:05:55 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:45.834 19:05:55 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1563531 00:35:45.834 19:05:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 1563531 ']' 00:35:45.834 19:05:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:45.834 19:05:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:45.834 19:05:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:45.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:45.834 19:05:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:45.834 19:05:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:45.834 [2024-07-20 19:05:56.011874] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:35:45.834 [2024-07-20 19:05:56.011946] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:45.834 EAL: No free 2048 kB hugepages reported on node 1 00:35:45.834 [2024-07-20 19:05:56.085641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:46.091 [2024-07-20 19:05:56.178340] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:46.091 [2024-07-20 19:05:56.178394] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:46.091 [2024-07-20 19:05:56.178421] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:46.091 [2024-07-20 19:05:56.178435] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:46.091 [2024-07-20 19:05:56.178446] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:46.091 [2024-07-20 19:05:56.178541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:46.091 [2024-07-20 19:05:56.178622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:46.091 [2024-07-20 19:05:56.178702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:46.091 [2024-07-20 19:05:56.178704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:46.091 19:05:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:46.091 19:05:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:35:46.091 19:05:56 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:46.091 19:05:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:46.091 19:05:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:46.091 19:05:56 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:46.091 19:05:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:46.091 19:05:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:46.091 19:05:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:46.091 19:05:56 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:35:46.091 19:05:56 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:35:46.091 19:05:56 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:35:46.092 19:05:56 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:46.092 19:05:56 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:35:46.092 19:05:56 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:35:46.092 19:05:56 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:35:46.092 19:05:56 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:35:46.092 19:05:56 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:35:46.092 19:05:56 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:35:46.092 19:05:56 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:35:46.092 19:05:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:46.092 19:05:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:35:46.092 19:05:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:46.092 19:05:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:46.092 19:05:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:46.092 19:05:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:46.092 ************************************ 00:35:46.092 START TEST spdk_target_abort 00:35:46.092 ************************************ 00:35:46.092 19:05:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:35:46.092 19:05:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:46.092 19:05:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:35:46.092 19:05:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.092 19:05:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:49.365 spdk_targetn1 00:35:49.365 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.365 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:49.365 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.365 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:49.365 [2024-07-20 19:05:59.207169] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:49.365 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.365 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:49.365 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.365 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:49.365 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.365 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:49.365 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.365 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:49.365 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.365 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:49.365 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.365 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:49.365 [2024-07-20 19:05:59.239455] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:49.365 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.366 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:49.366 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:49.366 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:49.366 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:49.366 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:49.366 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:49.366 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:49.366 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:49.366 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:49.366 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:49.366 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:49.366 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:49.366 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:49.366 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:49.366 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:49.366 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:49.366 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:49.366 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:49.366 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:49.366 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:49.366 19:05:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:49.366 EAL: No free 2048 kB hugepages reported on node 1 
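The first abort run is launched here with a queue depth of 4; the trace set qds=(4 24 64), so the same invocation is repeated for each depth in the output that follows. A condensed sketch of that loop, using the binary path and target string taken from the trace:

# Simplified form of the loop behind the three runs below (qd = 4, 24, 64).
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done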
00:35:52.682 Initializing NVMe Controllers 00:35:52.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:52.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:52.682 Initialization complete. Launching workers. 00:35:52.682 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 7522, failed: 0 00:35:52.682 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1437, failed to submit 6085 00:35:52.682 success 864, unsuccess 573, failed 0 00:35:52.682 19:06:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:52.682 19:06:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:52.682 EAL: No free 2048 kB hugepages reported on node 1 00:35:55.952 Initializing NVMe Controllers 00:35:55.952 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:55.952 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:55.952 Initialization complete. Launching workers. 00:35:55.953 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8556, failed: 0 00:35:55.953 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1254, failed to submit 7302 00:35:55.953 success 311, unsuccess 943, failed 0 00:35:55.953 19:06:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:55.953 19:06:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:55.953 EAL: No free 2048 kB hugepages reported on node 1 00:35:58.489 Initializing NVMe Controllers 00:35:58.489 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:58.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:58.489 Initialization complete. Launching workers. 
00:35:58.489 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31059, failed: 0 00:35:58.489 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2751, failed to submit 28308 00:35:58.489 success 539, unsuccess 2212, failed 0 00:35:58.489 19:06:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:58.489 19:06:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.489 19:06:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:58.489 19:06:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.489 19:06:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:58.489 19:06:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.489 19:06:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:59.877 19:06:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.877 19:06:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1563531 00:35:59.878 19:06:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 1563531 ']' 00:35:59.878 19:06:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 1563531 00:35:59.878 19:06:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:35:59.878 19:06:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:59.878 19:06:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1563531 00:35:59.878 19:06:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:59.878 19:06:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:59.878 19:06:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1563531' 00:35:59.878 killing process with pid 1563531 00:35:59.878 19:06:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 1563531 00:35:59.878 19:06:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 1563531 00:36:00.136 00:36:00.136 real 0m14.012s 00:36:00.136 user 0m52.998s 00:36:00.136 sys 0m2.648s 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:00.136 ************************************ 00:36:00.136 END TEST spdk_target_abort 00:36:00.136 ************************************ 00:36:00.136 19:06:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:00.136 19:06:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:00.136 19:06:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:00.136 19:06:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:00.136 ************************************ 00:36:00.136 START TEST kernel_target_abort 00:36:00.136 
************************************ 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:00.136 19:06:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:01.506 Waiting for block devices as requested 00:36:01.506 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:01.506 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:01.506 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:01.506 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:01.506 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:01.506 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:01.764 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:01.764 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:01.764 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:01.764 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:02.022 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:02.022 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:02.022 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:02.279 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:02.279 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:02.279 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:02.279 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:02.538 No valid GPT data, bailing 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:02.538 19:06:12 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:02.538 00:36:02.538 Discovery Log Number of Records 2, Generation counter 2 00:36:02.538 =====Discovery Log Entry 0====== 00:36:02.538 trtype: tcp 00:36:02.538 adrfam: ipv4 00:36:02.538 subtype: current discovery subsystem 00:36:02.538 treq: not specified, sq flow control disable supported 00:36:02.538 portid: 1 00:36:02.538 trsvcid: 4420 00:36:02.538 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:02.538 traddr: 10.0.0.1 00:36:02.538 eflags: none 00:36:02.538 sectype: none 00:36:02.538 =====Discovery Log Entry 1====== 00:36:02.538 trtype: tcp 00:36:02.538 adrfam: ipv4 00:36:02.538 subtype: nvme subsystem 00:36:02.538 treq: not specified, sq flow control disable supported 00:36:02.538 portid: 1 00:36:02.538 trsvcid: 4420 00:36:02.538 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:02.538 traddr: 10.0.0.1 00:36:02.538 eflags: none 00:36:02.538 sectype: none 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:02.538 19:06:12 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:02.538 19:06:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:02.538 EAL: No free 2048 kB hugepages reported on node 1 00:36:05.814 Initializing NVMe Controllers 00:36:05.814 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:05.814 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:05.814 Initialization complete. Launching workers. 00:36:05.814 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 21173, failed: 0 00:36:05.814 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21173, failed to submit 0 00:36:05.814 success 0, unsuccess 21173, failed 0 00:36:05.814 19:06:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:05.815 19:06:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:05.815 EAL: No free 2048 kB hugepages reported on node 1 00:36:09.093 Initializing NVMe Controllers 00:36:09.093 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:09.093 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:09.093 Initialization complete. Launching workers. 
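The configure_kernel_target trace earlier hides the redirection targets of its echo commands (xtrace does not print redirections). A sketch of the equivalent configfs setup for the in-kernel NVMe/TCP target follows, using the standard Linux nvmet attribute names; the echoed values match the trace, but the exact files the script writes to are not visible there, so treat these paths as the usual nvmet layout rather than a verbatim copy:

# Export a local NVMe namespace through the Linux kernel NVMe/TCP target.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
ns=$subsys/namespaces/1
port=$nvmet/ports/1

modprobe nvmet            # nvmet_tcp is typically pulled in when the tcp port is enabled

mkdir -p "$subsys" "$ns" "$port"

echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string seen in the trace
echo 1 > "$subsys/attr_allow_any_host"                         # no host allow-list
echo /dev/nvme0n1 > "$ns/device_path"                          # back the namespace with the local drive
echo 1 > "$ns/enable"

echo 10.0.0.1 > "$port/addr_traddr"                            # listen address
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

# Publish the subsystem on the port; discovery then shows the two log entries above.
ln -s "$subsys" "$port/subsystems/"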
00:36:09.093 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 45862, failed: 0 00:36:09.093 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 11546, failed to submit 34316 00:36:09.093 success 0, unsuccess 11546, failed 0 00:36:09.093 19:06:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:09.093 19:06:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:09.093 EAL: No free 2048 kB hugepages reported on node 1 00:36:12.379 Initializing NVMe Controllers 00:36:12.379 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:12.379 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:12.379 Initialization complete. Launching workers. 00:36:12.379 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 47235, failed: 0 00:36:12.379 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 11758, failed to submit 35477 00:36:12.379 success 0, unsuccess 11758, failed 0 00:36:12.379 19:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:12.379 19:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:12.379 19:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:12.379 19:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:12.379 19:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:12.379 19:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:12.379 19:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:12.379 19:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:12.379 19:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:12.379 19:06:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:12.681 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:12.681 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:12.681 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:12.681 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:12.681 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:12.681 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:12.681 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:12.938 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:12.938 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:12.938 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:12.938 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:12.938 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:12.938 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:12.938 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:36:12.938 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:12.938 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:13.871 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:13.871 00:36:13.871 real 0m13.715s 00:36:13.871 user 0m3.626s 00:36:13.871 sys 0m3.121s 00:36:13.871 19:06:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:13.871 19:06:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:13.871 ************************************ 00:36:13.871 END TEST kernel_target_abort 00:36:13.871 ************************************ 00:36:13.871 19:06:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:13.871 19:06:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:13.871 19:06:24 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:13.871 19:06:24 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:13.871 19:06:24 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:13.872 19:06:24 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:13.872 19:06:24 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:13.872 19:06:24 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:13.872 rmmod nvme_tcp 00:36:13.872 rmmod nvme_fabrics 00:36:14.130 rmmod nvme_keyring 00:36:14.130 19:06:24 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:14.130 19:06:24 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:14.130 19:06:24 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:14.130 19:06:24 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1563531 ']' 00:36:14.130 19:06:24 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1563531 00:36:14.130 19:06:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 1563531 ']' 00:36:14.130 19:06:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 1563531 00:36:14.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1563531) - No such process 00:36:14.130 19:06:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 1563531 is not found' 00:36:14.130 Process with pid 1563531 is not found 00:36:14.130 19:06:24 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:14.130 19:06:24 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:15.062 Waiting for block devices as requested 00:36:15.062 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:15.062 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:15.062 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:15.320 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:15.320 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:15.320 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:15.321 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:15.579 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:15.579 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:15.579 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:15.579 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:15.837 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:15.837 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:15.837 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:15.837 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:15.837 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:36:16.095 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:16.095 19:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:16.095 19:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:16.095 19:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:16.095 19:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:16.095 19:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:16.095 19:06:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:16.095 19:06:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:17.996 19:06:28 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:17.996 00:36:17.996 real 0m36.670s 00:36:17.996 user 0m58.566s 00:36:17.996 sys 0m8.881s 00:36:17.996 19:06:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:17.996 19:06:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:17.996 ************************************ 00:36:17.996 END TEST nvmf_abort_qd_sizes 00:36:17.996 ************************************ 00:36:18.253 19:06:28 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:18.253 19:06:28 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:18.253 19:06:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:18.253 19:06:28 -- common/autotest_common.sh@10 -- # set +x 00:36:18.253 ************************************ 00:36:18.253 START TEST keyring_file 00:36:18.253 ************************************ 00:36:18.253 19:06:28 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:18.253 * Looking for test storage... 
00:36:18.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:18.253 19:06:28 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:18.253 19:06:28 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:18.253 19:06:28 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:18.253 19:06:28 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:18.253 19:06:28 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:18.253 19:06:28 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.253 19:06:28 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.253 19:06:28 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.253 19:06:28 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:18.253 19:06:28 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:18.253 19:06:28 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:18.253 19:06:28 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:18.253 19:06:28 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:18.253 19:06:28 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:18.253 19:06:28 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:18.253 19:06:28 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:18.253 19:06:28 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:18.253 19:06:28 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:18.253 19:06:28 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:18.253 19:06:28 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:18.253 19:06:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:18.253 19:06:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:18.253 19:06:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.wv9NbNQoHJ 00:36:18.253 19:06:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:18.253 19:06:28 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.wv9NbNQoHJ 00:36:18.253 19:06:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.wv9NbNQoHJ 00:36:18.253 19:06:28 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.wv9NbNQoHJ 00:36:18.253 19:06:28 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:18.253 19:06:28 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:18.253 19:06:28 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:18.253 19:06:28 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:18.253 19:06:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:18.253 19:06:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:18.253 19:06:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.oe1JU9zerg 00:36:18.253 19:06:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:18.253 19:06:28 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:18.253 19:06:28 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.oe1JU9zerg 00:36:18.253 19:06:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.oe1JU9zerg 00:36:18.253 19:06:28 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.oe1JU9zerg 00:36:18.253 19:06:28 keyring_file -- keyring/file.sh@30 -- # tgtpid=1569663 00:36:18.253 19:06:28 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:18.253 19:06:28 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1569663 00:36:18.253 19:06:28 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1569663 ']' 00:36:18.253 19:06:28 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:18.253 19:06:28 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:18.253 19:06:28 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:18.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:18.253 19:06:28 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:18.253 19:06:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:18.253 [2024-07-20 19:06:28.559354] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
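The prep_key trace above creates the TLS PSK files (key0/key1) that the keyring tests later feed to the bdevperf initiator via --psk. A rough bash equivalent of the traced helper is sketched below; the inlined python stands in for the format_interchange_psk step, and the exact interchange payload (base64 of the raw key with an appended CRC-32, little-endian) is an assumption about that format rather than something shown in the trace:

prep_key() {
    local name=$1 key=$2 digest=$3 path
    path=$(mktemp)   # e.g. /tmp/tmp.wv9NbNQoHJ in the trace

    # NVMe TLS PSK interchange format: "NVMeTLSkey-1:<digest>:<base64 payload>:"
    # (payload layout is assumed here; digest 0 means no hash, as in the trace)
    python3 - "$key" "$digest" > "$path" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
digest = int(sys.argv[2])
payload = base64.b64encode(key + zlib.crc32(key).to_bytes(4, "little")).decode()
print(f"NVMeTLSkey-1:{digest:02x}:{payload}:")
EOF

    chmod 0600 "$path"   # keyring_file_add_key later rejects looser permissions (0660 case below)
    echo "$path"
}

# Matching the trace:
# key0path=$(prep_key key0 00112233445566778899aabbccddeeff 0)
# key1path=$(prep_key key1 112233445566778899aabbccddeeff00 0)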
00:36:18.253 [2024-07-20 19:06:28.559450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1569663 ] 00:36:18.511 EAL: No free 2048 kB hugepages reported on node 1 00:36:18.511 [2024-07-20 19:06:28.622474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.511 [2024-07-20 19:06:28.709057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:18.771 19:06:28 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:18.771 [2024-07-20 19:06:28.938984] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:18.771 null0 00:36:18.771 [2024-07-20 19:06:28.971032] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:18.771 [2024-07-20 19:06:28.971474] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:18.771 [2024-07-20 19:06:28.979058] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.771 19:06:28 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:18.771 [2024-07-20 19:06:28.987084] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:18.771 request: 00:36:18.771 { 00:36:18.771 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:18.771 "secure_channel": false, 00:36:18.771 "listen_address": { 00:36:18.771 "trtype": "tcp", 00:36:18.771 "traddr": "127.0.0.1", 00:36:18.771 "trsvcid": "4420" 00:36:18.771 }, 00:36:18.771 "method": "nvmf_subsystem_add_listener", 00:36:18.771 "req_id": 1 00:36:18.771 } 00:36:18.771 Got JSON-RPC error response 00:36:18.771 response: 00:36:18.771 { 00:36:18.771 "code": -32602, 00:36:18.771 "message": "Invalid parameters" 00:36:18.771 } 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:18.771 19:06:28 
keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:18.771 19:06:28 keyring_file -- keyring/file.sh@46 -- # bperfpid=1569790 00:36:18.771 19:06:28 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:18.771 19:06:28 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1569790 /var/tmp/bperf.sock 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1569790 ']' 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:18.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:18.771 19:06:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:18.771 [2024-07-20 19:06:29.031069] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:36:18.771 [2024-07-20 19:06:29.031152] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1569790 ] 00:36:18.771 EAL: No free 2048 kB hugepages reported on node 1 00:36:18.771 [2024-07-20 19:06:29.089341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:19.029 [2024-07-20 19:06:29.177613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:19.029 19:06:29 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:19.029 19:06:29 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:19.029 19:06:29 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wv9NbNQoHJ 00:36:19.029 19:06:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wv9NbNQoHJ 00:36:19.286 19:06:29 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.oe1JU9zerg 00:36:19.286 19:06:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.oe1JU9zerg 00:36:19.549 19:06:29 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:19.549 19:06:29 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:19.549 19:06:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.549 19:06:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.549 19:06:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:19.820 19:06:30 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.wv9NbNQoHJ == \/\t\m\p\/\t\m\p\.\w\v\9\N\b\N\Q\o\H\J ]] 00:36:19.820 19:06:30 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:19.820 19:06:30 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:19.820 19:06:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.820 19:06:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:19.820 19:06:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:20.077 19:06:30 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.oe1JU9zerg == \/\t\m\p\/\t\m\p\.\o\e\1\J\U\9\z\e\r\g ]] 00:36:20.077 19:06:30 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:20.077 19:06:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:20.077 19:06:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:20.077 19:06:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:20.077 19:06:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:20.077 19:06:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:20.334 19:06:30 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:20.334 19:06:30 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:20.334 19:06:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:20.334 19:06:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:20.334 19:06:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:20.334 19:06:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:20.334 19:06:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:20.591 19:06:30 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:20.591 19:06:30 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:20.591 19:06:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:20.849 [2024-07-20 19:06:31.014299] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:20.849 nvme0n1 00:36:20.849 19:06:31 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:20.849 19:06:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:20.849 19:06:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:20.849 19:06:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:20.849 19:06:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:20.849 19:06:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:21.106 19:06:31 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:21.106 19:06:31 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:21.106 19:06:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:21.106 19:06:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:21.106 19:06:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:21.106 
19:06:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:21.106 19:06:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:21.364 19:06:31 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:21.364 19:06:31 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:21.621 Running I/O for 1 seconds... 00:36:22.553 00:36:22.553 Latency(us) 00:36:22.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:22.553 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:22.553 nvme0n1 : 1.03 3080.32 12.03 0.00 0.00 41136.97 9854.67 55924.05 00:36:22.553 =================================================================================================================== 00:36:22.553 Total : 3080.32 12.03 0.00 0.00 41136.97 9854.67 55924.05 00:36:22.553 0 00:36:22.553 19:06:32 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:22.553 19:06:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:22.812 19:06:32 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:22.812 19:06:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:22.812 19:06:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:22.812 19:06:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:22.812 19:06:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.812 19:06:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:23.071 19:06:33 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:23.071 19:06:33 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:23.071 19:06:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:23.071 19:06:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:23.071 19:06:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:23.071 19:06:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:23.071 19:06:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:23.328 19:06:33 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:23.328 19:06:33 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:23.328 19:06:33 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:23.328 19:06:33 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:23.328 19:06:33 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:23.328 19:06:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:23.328 19:06:33 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:23.328 19:06:33 
keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:23.328 19:06:33 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:23.328 19:06:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:23.585 [2024-07-20 19:06:33.741633] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c593d0 (107): Transport endpoint is not connected 00:36:23.585 [2024-07-20 19:06:33.741655] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:23.585 [2024-07-20 19:06:33.742622] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c593d0 (9): Bad file descriptor 00:36:23.585 [2024-07-20 19:06:33.743619] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:23.585 [2024-07-20 19:06:33.743643] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:23.585 [2024-07-20 19:06:33.743659] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:23.585 request: 00:36:23.585 { 00:36:23.585 "name": "nvme0", 00:36:23.585 "trtype": "tcp", 00:36:23.585 "traddr": "127.0.0.1", 00:36:23.585 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:23.585 "adrfam": "ipv4", 00:36:23.585 "trsvcid": "4420", 00:36:23.585 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:23.585 "psk": "key1", 00:36:23.585 "method": "bdev_nvme_attach_controller", 00:36:23.585 "req_id": 1 00:36:23.585 } 00:36:23.585 Got JSON-RPC error response 00:36:23.585 response: 00:36:23.585 { 00:36:23.585 "code": -5, 00:36:23.585 "message": "Input/output error" 00:36:23.585 } 00:36:23.585 19:06:33 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:23.585 19:06:33 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:23.585 19:06:33 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:23.585 19:06:33 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:23.585 19:06:33 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:23.585 19:06:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:23.585 19:06:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:23.585 19:06:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:23.585 19:06:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:23.585 19:06:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:23.842 19:06:34 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:23.842 19:06:34 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:23.842 19:06:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:23.842 19:06:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:23.842 19:06:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:23.843 19:06:34 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:23.843 19:06:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:24.099 19:06:34 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:24.099 19:06:34 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:24.099 19:06:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:24.356 19:06:34 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:24.356 19:06:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:24.613 19:06:34 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:24.613 19:06:34 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:24.613 19:06:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:24.871 19:06:34 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:24.871 19:06:34 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.wv9NbNQoHJ 00:36:24.871 19:06:34 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.wv9NbNQoHJ 00:36:24.871 19:06:34 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:24.871 19:06:34 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.wv9NbNQoHJ 00:36:24.871 19:06:34 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:24.871 19:06:34 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:24.871 19:06:34 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:24.871 19:06:34 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:24.871 19:06:34 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wv9NbNQoHJ 00:36:24.871 19:06:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wv9NbNQoHJ 00:36:25.128 [2024-07-20 19:06:35.211367] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.wv9NbNQoHJ': 0100660 00:36:25.128 [2024-07-20 19:06:35.211406] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:25.128 request: 00:36:25.128 { 00:36:25.128 "name": "key0", 00:36:25.128 "path": "/tmp/tmp.wv9NbNQoHJ", 00:36:25.128 "method": "keyring_file_add_key", 00:36:25.128 "req_id": 1 00:36:25.128 } 00:36:25.128 Got JSON-RPC error response 00:36:25.128 response: 00:36:25.128 { 00:36:25.128 "code": -1, 00:36:25.128 "message": "Operation not permitted" 00:36:25.128 } 00:36:25.128 19:06:35 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:25.128 19:06:35 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:25.128 19:06:35 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:25.128 19:06:35 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:25.128 19:06:35 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.wv9NbNQoHJ 00:36:25.128 19:06:35 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.wv9NbNQoHJ 00:36:25.128 19:06:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wv9NbNQoHJ 00:36:25.384 19:06:35 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.wv9NbNQoHJ 00:36:25.384 19:06:35 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:25.384 19:06:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:25.384 19:06:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:25.384 19:06:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:25.384 19:06:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:25.384 19:06:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:25.641 19:06:35 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:25.641 19:06:35 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:25.641 19:06:35 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:25.641 19:06:35 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:25.641 19:06:35 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:25.641 19:06:35 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:25.641 19:06:35 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:25.641 19:06:35 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:25.641 19:06:35 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:25.641 19:06:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:25.641 [2024-07-20 19:06:35.961415] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.wv9NbNQoHJ': No such file or directory 00:36:25.641 [2024-07-20 19:06:35.961455] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:25.641 [2024-07-20 19:06:35.961496] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:25.641 [2024-07-20 19:06:35.961509] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:25.641 [2024-07-20 19:06:35.961522] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:25.904 request: 00:36:25.904 { 00:36:25.904 "name": "nvme0", 00:36:25.904 "trtype": "tcp", 00:36:25.904 "traddr": "127.0.0.1", 00:36:25.904 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:25.904 "adrfam": "ipv4", 00:36:25.904 "trsvcid": "4420", 00:36:25.904 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:25.904 "psk": "key0", 00:36:25.904 "method": "bdev_nvme_attach_controller", 
00:36:25.904 "req_id": 1 00:36:25.904 } 00:36:25.904 Got JSON-RPC error response 00:36:25.904 response: 00:36:25.904 { 00:36:25.904 "code": -19, 00:36:25.904 "message": "No such device" 00:36:25.904 } 00:36:25.904 19:06:35 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:25.904 19:06:35 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:25.904 19:06:35 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:25.904 19:06:35 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:25.904 19:06:35 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:25.904 19:06:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:26.208 19:06:36 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:26.208 19:06:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:26.208 19:06:36 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:26.208 19:06:36 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:26.208 19:06:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:26.208 19:06:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:26.208 19:06:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AU3Ir6l3UH 00:36:26.208 19:06:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:26.208 19:06:36 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:26.208 19:06:36 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:26.208 19:06:36 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:26.208 19:06:36 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:26.208 19:06:36 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:26.208 19:06:36 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:26.208 19:06:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AU3Ir6l3UH 00:36:26.208 19:06:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AU3Ir6l3UH 00:36:26.208 19:06:36 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.AU3Ir6l3UH 00:36:26.208 19:06:36 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AU3Ir6l3UH 00:36:26.208 19:06:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AU3Ir6l3UH 00:36:26.465 19:06:36 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:26.465 19:06:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:26.723 nvme0n1 00:36:26.723 19:06:36 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:26.723 19:06:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:26.723 19:06:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:26.723 19:06:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:26.723 19:06:36 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:26.723 19:06:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:26.979 19:06:37 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:26.979 19:06:37 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:26.980 19:06:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:27.236 19:06:37 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:27.236 19:06:37 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:27.236 19:06:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:27.236 19:06:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:27.236 19:06:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:27.493 19:06:37 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:27.493 19:06:37 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:27.493 19:06:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:27.493 19:06:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:27.493 19:06:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:27.493 19:06:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:27.493 19:06:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:27.750 19:06:37 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:27.750 19:06:37 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:27.751 19:06:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:28.009 19:06:38 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:28.009 19:06:38 keyring_file -- keyring/file.sh@104 -- # jq length 00:36:28.009 19:06:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:28.009 19:06:38 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:28.009 19:06:38 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AU3Ir6l3UH 00:36:28.009 19:06:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AU3Ir6l3UH 00:36:28.266 19:06:38 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.oe1JU9zerg 00:36:28.266 19:06:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.oe1JU9zerg 00:36:28.524 19:06:38 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:28.524 19:06:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:29.089 nvme0n1 00:36:29.089 19:06:39 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:29.089 19:06:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:29.348 19:06:39 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:29.348 "subsystems": [ 00:36:29.348 { 00:36:29.348 "subsystem": "keyring", 00:36:29.348 "config": [ 00:36:29.348 { 00:36:29.348 "method": "keyring_file_add_key", 00:36:29.348 "params": { 00:36:29.348 "name": "key0", 00:36:29.348 "path": "/tmp/tmp.AU3Ir6l3UH" 00:36:29.348 } 00:36:29.348 }, 00:36:29.348 { 00:36:29.348 "method": "keyring_file_add_key", 00:36:29.348 "params": { 00:36:29.348 "name": "key1", 00:36:29.349 "path": "/tmp/tmp.oe1JU9zerg" 00:36:29.349 } 00:36:29.349 } 00:36:29.349 ] 00:36:29.349 }, 00:36:29.349 { 00:36:29.349 "subsystem": "iobuf", 00:36:29.349 "config": [ 00:36:29.349 { 00:36:29.349 "method": "iobuf_set_options", 00:36:29.349 "params": { 00:36:29.349 "small_pool_count": 8192, 00:36:29.349 "large_pool_count": 1024, 00:36:29.349 "small_bufsize": 8192, 00:36:29.349 "large_bufsize": 135168 00:36:29.349 } 00:36:29.349 } 00:36:29.349 ] 00:36:29.349 }, 00:36:29.349 { 00:36:29.349 "subsystem": "sock", 00:36:29.349 "config": [ 00:36:29.349 { 00:36:29.349 "method": "sock_set_default_impl", 00:36:29.349 "params": { 00:36:29.349 "impl_name": "posix" 00:36:29.349 } 00:36:29.349 }, 00:36:29.349 { 00:36:29.349 "method": "sock_impl_set_options", 00:36:29.349 "params": { 00:36:29.349 "impl_name": "ssl", 00:36:29.349 "recv_buf_size": 4096, 00:36:29.349 "send_buf_size": 4096, 00:36:29.349 "enable_recv_pipe": true, 00:36:29.349 "enable_quickack": false, 00:36:29.349 "enable_placement_id": 0, 00:36:29.349 "enable_zerocopy_send_server": true, 00:36:29.349 "enable_zerocopy_send_client": false, 00:36:29.349 "zerocopy_threshold": 0, 00:36:29.349 "tls_version": 0, 00:36:29.349 "enable_ktls": false 00:36:29.349 } 00:36:29.349 }, 00:36:29.349 { 00:36:29.349 "method": "sock_impl_set_options", 00:36:29.349 "params": { 00:36:29.349 "impl_name": "posix", 00:36:29.349 "recv_buf_size": 2097152, 00:36:29.349 "send_buf_size": 2097152, 00:36:29.349 "enable_recv_pipe": true, 00:36:29.349 "enable_quickack": false, 00:36:29.349 "enable_placement_id": 0, 00:36:29.349 "enable_zerocopy_send_server": true, 00:36:29.349 "enable_zerocopy_send_client": false, 00:36:29.349 "zerocopy_threshold": 0, 00:36:29.349 "tls_version": 0, 00:36:29.349 "enable_ktls": false 00:36:29.349 } 00:36:29.349 } 00:36:29.349 ] 00:36:29.349 }, 00:36:29.349 { 00:36:29.349 "subsystem": "vmd", 00:36:29.349 "config": [] 00:36:29.349 }, 00:36:29.349 { 00:36:29.349 "subsystem": "accel", 00:36:29.349 "config": [ 00:36:29.349 { 00:36:29.349 "method": "accel_set_options", 00:36:29.349 "params": { 00:36:29.349 "small_cache_size": 128, 00:36:29.349 "large_cache_size": 16, 00:36:29.349 "task_count": 2048, 00:36:29.349 "sequence_count": 2048, 00:36:29.349 "buf_count": 2048 00:36:29.349 } 00:36:29.349 } 00:36:29.349 ] 00:36:29.349 }, 00:36:29.349 { 00:36:29.349 "subsystem": "bdev", 00:36:29.349 "config": [ 00:36:29.349 { 00:36:29.349 "method": "bdev_set_options", 00:36:29.349 "params": { 00:36:29.349 "bdev_io_pool_size": 65535, 00:36:29.349 "bdev_io_cache_size": 256, 00:36:29.349 "bdev_auto_examine": true, 00:36:29.349 "iobuf_small_cache_size": 128, 
00:36:29.349 "iobuf_large_cache_size": 16 00:36:29.349 } 00:36:29.349 }, 00:36:29.349 { 00:36:29.349 "method": "bdev_raid_set_options", 00:36:29.349 "params": { 00:36:29.349 "process_window_size_kb": 1024 00:36:29.349 } 00:36:29.349 }, 00:36:29.349 { 00:36:29.349 "method": "bdev_iscsi_set_options", 00:36:29.349 "params": { 00:36:29.349 "timeout_sec": 30 00:36:29.349 } 00:36:29.349 }, 00:36:29.349 { 00:36:29.349 "method": "bdev_nvme_set_options", 00:36:29.349 "params": { 00:36:29.349 "action_on_timeout": "none", 00:36:29.349 "timeout_us": 0, 00:36:29.349 "timeout_admin_us": 0, 00:36:29.349 "keep_alive_timeout_ms": 10000, 00:36:29.349 "arbitration_burst": 0, 00:36:29.349 "low_priority_weight": 0, 00:36:29.349 "medium_priority_weight": 0, 00:36:29.349 "high_priority_weight": 0, 00:36:29.349 "nvme_adminq_poll_period_us": 10000, 00:36:29.349 "nvme_ioq_poll_period_us": 0, 00:36:29.349 "io_queue_requests": 512, 00:36:29.349 "delay_cmd_submit": true, 00:36:29.349 "transport_retry_count": 4, 00:36:29.349 "bdev_retry_count": 3, 00:36:29.349 "transport_ack_timeout": 0, 00:36:29.349 "ctrlr_loss_timeout_sec": 0, 00:36:29.349 "reconnect_delay_sec": 0, 00:36:29.349 "fast_io_fail_timeout_sec": 0, 00:36:29.349 "disable_auto_failback": false, 00:36:29.349 "generate_uuids": false, 00:36:29.349 "transport_tos": 0, 00:36:29.349 "nvme_error_stat": false, 00:36:29.349 "rdma_srq_size": 0, 00:36:29.349 "io_path_stat": false, 00:36:29.349 "allow_accel_sequence": false, 00:36:29.349 "rdma_max_cq_size": 0, 00:36:29.349 "rdma_cm_event_timeout_ms": 0, 00:36:29.349 "dhchap_digests": [ 00:36:29.349 "sha256", 00:36:29.349 "sha384", 00:36:29.349 "sha512" 00:36:29.349 ], 00:36:29.349 "dhchap_dhgroups": [ 00:36:29.349 "null", 00:36:29.349 "ffdhe2048", 00:36:29.349 "ffdhe3072", 00:36:29.349 "ffdhe4096", 00:36:29.349 "ffdhe6144", 00:36:29.349 "ffdhe8192" 00:36:29.349 ] 00:36:29.349 } 00:36:29.349 }, 00:36:29.349 { 00:36:29.349 "method": "bdev_nvme_attach_controller", 00:36:29.349 "params": { 00:36:29.349 "name": "nvme0", 00:36:29.349 "trtype": "TCP", 00:36:29.349 "adrfam": "IPv4", 00:36:29.349 "traddr": "127.0.0.1", 00:36:29.349 "trsvcid": "4420", 00:36:29.350 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:29.350 "prchk_reftag": false, 00:36:29.350 "prchk_guard": false, 00:36:29.350 "ctrlr_loss_timeout_sec": 0, 00:36:29.350 "reconnect_delay_sec": 0, 00:36:29.350 "fast_io_fail_timeout_sec": 0, 00:36:29.350 "psk": "key0", 00:36:29.350 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:29.350 "hdgst": false, 00:36:29.350 "ddgst": false 00:36:29.350 } 00:36:29.350 }, 00:36:29.350 { 00:36:29.350 "method": "bdev_nvme_set_hotplug", 00:36:29.350 "params": { 00:36:29.350 "period_us": 100000, 00:36:29.350 "enable": false 00:36:29.350 } 00:36:29.350 }, 00:36:29.350 { 00:36:29.350 "method": "bdev_wait_for_examine" 00:36:29.350 } 00:36:29.350 ] 00:36:29.350 }, 00:36:29.350 { 00:36:29.350 "subsystem": "nbd", 00:36:29.350 "config": [] 00:36:29.350 } 00:36:29.350 ] 00:36:29.350 }' 00:36:29.350 19:06:39 keyring_file -- keyring/file.sh@114 -- # killprocess 1569790 00:36:29.350 19:06:39 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1569790 ']' 00:36:29.350 19:06:39 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1569790 00:36:29.350 19:06:39 keyring_file -- common/autotest_common.sh@951 -- # uname 00:36:29.350 19:06:39 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:29.350 19:06:39 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1569790 00:36:29.350 19:06:39 keyring_file 
-- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:29.350 19:06:39 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:29.350 19:06:39 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1569790' 00:36:29.350 killing process with pid 1569790 00:36:29.350 19:06:39 keyring_file -- common/autotest_common.sh@965 -- # kill 1569790 00:36:29.350 Received shutdown signal, test time was about 1.000000 seconds 00:36:29.350 00:36:29.350 Latency(us) 00:36:29.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:29.350 =================================================================================================================== 00:36:29.350 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:29.350 19:06:39 keyring_file -- common/autotest_common.sh@970 -- # wait 1569790 00:36:29.609 19:06:39 keyring_file -- keyring/file.sh@117 -- # bperfpid=1571130 00:36:29.609 19:06:39 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1571130 /var/tmp/bperf.sock 00:36:29.609 19:06:39 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1571130 ']' 00:36:29.609 19:06:39 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:29.609 19:06:39 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:29.609 19:06:39 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:29.609 19:06:39 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:29.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
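For reference, the restart that the trace above performs can be reproduced by hand roughly as follows. The flags are the ones shown in the trace; the use of process substitution is inferred from the /dev/fd/63 argument, and the sketch assumes the previous bdevperf on /var/tmp/bperf.sock has already been stopped so the socket path is free.
  # Sketch: dump the live bperf configuration (keyring_file keys included) and feed
  # it to a fresh bdevperf instance without writing a config file to disk.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  config=$("$rpc" -s /var/tmp/bperf.sock save_config)   # JSON dump of the running target
  "$bperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
           -r /var/tmp/bperf.sock -z -c <(echo "$config") &   # config arrives as /dev/fd/NN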
00:36:29.609 19:06:39 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:29.609 19:06:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:29.609 19:06:39 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:29.609 "subsystems": [ 00:36:29.609 { 00:36:29.609 "subsystem": "keyring", 00:36:29.609 "config": [ 00:36:29.609 { 00:36:29.609 "method": "keyring_file_add_key", 00:36:29.609 "params": { 00:36:29.609 "name": "key0", 00:36:29.609 "path": "/tmp/tmp.AU3Ir6l3UH" 00:36:29.609 } 00:36:29.609 }, 00:36:29.609 { 00:36:29.609 "method": "keyring_file_add_key", 00:36:29.609 "params": { 00:36:29.609 "name": "key1", 00:36:29.609 "path": "/tmp/tmp.oe1JU9zerg" 00:36:29.609 } 00:36:29.609 } 00:36:29.609 ] 00:36:29.609 }, 00:36:29.609 { 00:36:29.609 "subsystem": "iobuf", 00:36:29.609 "config": [ 00:36:29.609 { 00:36:29.609 "method": "iobuf_set_options", 00:36:29.609 "params": { 00:36:29.609 "small_pool_count": 8192, 00:36:29.609 "large_pool_count": 1024, 00:36:29.609 "small_bufsize": 8192, 00:36:29.609 "large_bufsize": 135168 00:36:29.609 } 00:36:29.609 } 00:36:29.609 ] 00:36:29.609 }, 00:36:29.609 { 00:36:29.609 "subsystem": "sock", 00:36:29.609 "config": [ 00:36:29.609 { 00:36:29.609 "method": "sock_set_default_impl", 00:36:29.609 "params": { 00:36:29.609 "impl_name": "posix" 00:36:29.609 } 00:36:29.609 }, 00:36:29.609 { 00:36:29.609 "method": "sock_impl_set_options", 00:36:29.609 "params": { 00:36:29.609 "impl_name": "ssl", 00:36:29.609 "recv_buf_size": 4096, 00:36:29.609 "send_buf_size": 4096, 00:36:29.610 "enable_recv_pipe": true, 00:36:29.610 "enable_quickack": false, 00:36:29.610 "enable_placement_id": 0, 00:36:29.610 "enable_zerocopy_send_server": true, 00:36:29.610 "enable_zerocopy_send_client": false, 00:36:29.610 "zerocopy_threshold": 0, 00:36:29.610 "tls_version": 0, 00:36:29.610 "enable_ktls": false 00:36:29.610 } 00:36:29.610 }, 00:36:29.610 { 00:36:29.610 "method": "sock_impl_set_options", 00:36:29.610 "params": { 00:36:29.610 "impl_name": "posix", 00:36:29.610 "recv_buf_size": 2097152, 00:36:29.610 "send_buf_size": 2097152, 00:36:29.610 "enable_recv_pipe": true, 00:36:29.610 "enable_quickack": false, 00:36:29.610 "enable_placement_id": 0, 00:36:29.610 "enable_zerocopy_send_server": true, 00:36:29.610 "enable_zerocopy_send_client": false, 00:36:29.610 "zerocopy_threshold": 0, 00:36:29.610 "tls_version": 0, 00:36:29.610 "enable_ktls": false 00:36:29.610 } 00:36:29.610 } 00:36:29.610 ] 00:36:29.610 }, 00:36:29.610 { 00:36:29.610 "subsystem": "vmd", 00:36:29.610 "config": [] 00:36:29.610 }, 00:36:29.610 { 00:36:29.610 "subsystem": "accel", 00:36:29.610 "config": [ 00:36:29.610 { 00:36:29.610 "method": "accel_set_options", 00:36:29.610 "params": { 00:36:29.610 "small_cache_size": 128, 00:36:29.610 "large_cache_size": 16, 00:36:29.610 "task_count": 2048, 00:36:29.610 "sequence_count": 2048, 00:36:29.610 "buf_count": 2048 00:36:29.610 } 00:36:29.610 } 00:36:29.610 ] 00:36:29.610 }, 00:36:29.610 { 00:36:29.610 "subsystem": "bdev", 00:36:29.610 "config": [ 00:36:29.610 { 00:36:29.610 "method": "bdev_set_options", 00:36:29.610 "params": { 00:36:29.610 "bdev_io_pool_size": 65535, 00:36:29.610 "bdev_io_cache_size": 256, 00:36:29.610 "bdev_auto_examine": true, 00:36:29.610 "iobuf_small_cache_size": 128, 00:36:29.610 "iobuf_large_cache_size": 16 00:36:29.610 } 00:36:29.610 }, 00:36:29.610 { 00:36:29.610 "method": "bdev_raid_set_options", 00:36:29.610 "params": { 00:36:29.610 "process_window_size_kb": 1024 00:36:29.610 } 00:36:29.610 }, 00:36:29.610 { 00:36:29.610 
"method": "bdev_iscsi_set_options", 00:36:29.610 "params": { 00:36:29.610 "timeout_sec": 30 00:36:29.610 } 00:36:29.610 }, 00:36:29.610 { 00:36:29.610 "method": "bdev_nvme_set_options", 00:36:29.610 "params": { 00:36:29.610 "action_on_timeout": "none", 00:36:29.610 "timeout_us": 0, 00:36:29.610 "timeout_admin_us": 0, 00:36:29.610 "keep_alive_timeout_ms": 10000, 00:36:29.610 "arbitration_burst": 0, 00:36:29.610 "low_priority_weight": 0, 00:36:29.610 "medium_priority_weight": 0, 00:36:29.610 "high_priority_weight": 0, 00:36:29.610 "nvme_adminq_poll_period_us": 10000, 00:36:29.610 "nvme_ioq_poll_period_us": 0, 00:36:29.610 "io_queue_requests": 512, 00:36:29.610 "delay_cmd_submit": true, 00:36:29.610 "transport_retry_count": 4, 00:36:29.610 "bdev_retry_count": 3, 00:36:29.610 "transport_ack_timeout": 0, 00:36:29.610 "ctrlr_loss_timeout_sec": 0, 00:36:29.610 "reconnect_delay_sec": 0, 00:36:29.610 "fast_io_fail_timeout_sec": 0, 00:36:29.610 "disable_auto_failback": false, 00:36:29.610 "generate_uuids": false, 00:36:29.610 "transport_tos": 0, 00:36:29.610 "nvme_error_stat": false, 00:36:29.610 "rdma_srq_size": 0, 00:36:29.610 "io_path_stat": false, 00:36:29.610 "allow_accel_sequence": false, 00:36:29.610 "rdma_max_cq_size": 0, 00:36:29.610 "rdma_cm_event_timeout_ms": 0, 00:36:29.610 "dhchap_digests": [ 00:36:29.610 "sha256", 00:36:29.610 "sha384", 00:36:29.610 "sha512" 00:36:29.610 ], 00:36:29.610 "dhchap_dhgroups": [ 00:36:29.610 "null", 00:36:29.610 "ffdhe2048", 00:36:29.610 "ffdhe3072", 00:36:29.610 "ffdhe4096", 00:36:29.610 "ffdhe6144", 00:36:29.610 "ffdhe8192" 00:36:29.610 ] 00:36:29.610 } 00:36:29.610 }, 00:36:29.610 { 00:36:29.610 "method": "bdev_nvme_attach_controller", 00:36:29.610 "params": { 00:36:29.610 "name": "nvme0", 00:36:29.610 "trtype": "TCP", 00:36:29.610 "adrfam": "IPv4", 00:36:29.610 "traddr": "127.0.0.1", 00:36:29.610 "trsvcid": "4420", 00:36:29.610 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:29.610 "prchk_reftag": false, 00:36:29.610 "prchk_guard": false, 00:36:29.610 "ctrlr_loss_timeout_sec": 0, 00:36:29.610 "reconnect_delay_sec": 0, 00:36:29.610 "fast_io_fail_timeout_sec": 0, 00:36:29.610 "psk": "key0", 00:36:29.610 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:29.610 "hdgst": false, 00:36:29.610 "ddgst": false 00:36:29.610 } 00:36:29.610 }, 00:36:29.610 { 00:36:29.610 "method": "bdev_nvme_set_hotplug", 00:36:29.610 "params": { 00:36:29.610 "period_us": 100000, 00:36:29.610 "enable": false 00:36:29.610 } 00:36:29.610 }, 00:36:29.610 { 00:36:29.610 "method": "bdev_wait_for_examine" 00:36:29.610 } 00:36:29.610 ] 00:36:29.610 }, 00:36:29.610 { 00:36:29.610 "subsystem": "nbd", 00:36:29.610 "config": [] 00:36:29.610 } 00:36:29.610 ] 00:36:29.610 }' 00:36:29.610 [2024-07-20 19:06:39.724837] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:36:29.611 [2024-07-20 19:06:39.724928] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1571130 ] 00:36:29.611 EAL: No free 2048 kB hugepages reported on node 1 00:36:29.611 [2024-07-20 19:06:39.783278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:29.611 [2024-07-20 19:06:39.869189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:29.869 [2024-07-20 19:06:40.055806] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:30.435 19:06:40 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:30.435 19:06:40 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:30.435 19:06:40 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:36:30.435 19:06:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:30.435 19:06:40 keyring_file -- keyring/file.sh@120 -- # jq length 00:36:30.694 19:06:40 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:36:30.694 19:06:40 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:36:30.694 19:06:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:30.694 19:06:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:30.694 19:06:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:30.694 19:06:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:30.694 19:06:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:30.952 19:06:41 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:30.952 19:06:41 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:36:30.952 19:06:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:30.952 19:06:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:30.952 19:06:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:30.952 19:06:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:30.952 19:06:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:31.211 19:06:41 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:36:31.211 19:06:41 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:36:31.211 19:06:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:31.211 19:06:41 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:36:31.470 19:06:41 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:36:31.470 19:06:41 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:31.470 19:06:41 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.AU3Ir6l3UH /tmp/tmp.oe1JU9zerg 00:36:31.470 19:06:41 keyring_file -- keyring/file.sh@20 -- # killprocess 1571130 00:36:31.470 19:06:41 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1571130 ']' 00:36:31.470 19:06:41 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1571130 00:36:31.470 19:06:41 keyring_file -- common/autotest_common.sh@951 -- # 
uname 00:36:31.470 19:06:41 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:31.470 19:06:41 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1571130 00:36:31.470 19:06:41 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:31.470 19:06:41 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:31.470 19:06:41 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1571130' 00:36:31.470 killing process with pid 1571130 00:36:31.470 19:06:41 keyring_file -- common/autotest_common.sh@965 -- # kill 1571130 00:36:31.470 Received shutdown signal, test time was about 1.000000 seconds 00:36:31.470 00:36:31.470 Latency(us) 00:36:31.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:31.470 =================================================================================================================== 00:36:31.470 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:31.470 19:06:41 keyring_file -- common/autotest_common.sh@970 -- # wait 1571130 00:36:31.729 19:06:41 keyring_file -- keyring/file.sh@21 -- # killprocess 1569663 00:36:31.729 19:06:41 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1569663 ']' 00:36:31.729 19:06:41 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1569663 00:36:31.729 19:06:41 keyring_file -- common/autotest_common.sh@951 -- # uname 00:36:31.729 19:06:41 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:31.729 19:06:41 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1569663 00:36:31.729 19:06:41 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:31.729 19:06:41 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:31.729 19:06:41 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1569663' 00:36:31.729 killing process with pid 1569663 00:36:31.729 19:06:41 keyring_file -- common/autotest_common.sh@965 -- # kill 1569663 00:36:31.729 [2024-07-20 19:06:41.970346] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:31.729 19:06:41 keyring_file -- common/autotest_common.sh@970 -- # wait 1569663 00:36:32.296 00:36:32.296 real 0m13.990s 00:36:32.296 user 0m34.429s 00:36:32.296 sys 0m3.197s 00:36:32.296 19:06:42 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:32.296 19:06:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:32.296 ************************************ 00:36:32.296 END TEST keyring_file 00:36:32.296 ************************************ 00:36:32.296 19:06:42 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:36:32.296 19:06:42 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:32.296 19:06:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:32.296 19:06:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:32.296 19:06:42 -- common/autotest_common.sh@10 -- # set +x 00:36:32.296 ************************************ 00:36:32.296 START TEST keyring_linux 00:36:32.296 ************************************ 00:36:32.296 19:06:42 keyring_linux -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:32.296 * Looking for test storage... 
00:36:32.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:32.296 19:06:42 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:32.296 19:06:42 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:32.296 19:06:42 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:32.296 19:06:42 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:32.296 19:06:42 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:32.296 19:06:42 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:32.296 19:06:42 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:32.296 19:06:42 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:32.296 19:06:42 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:32.296 19:06:42 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:32.296 19:06:42 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:32.296 19:06:42 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:32.296 19:06:42 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:32.296 19:06:42 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:32.296 19:06:42 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:32.296 19:06:42 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:32.296 19:06:42 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:32.296 19:06:42 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:32.296 19:06:42 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:32.296 19:06:42 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:32.296 19:06:42 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:32.296 19:06:42 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:32.296 19:06:42 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:32.296 19:06:42 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.296 19:06:42 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.297 19:06:42 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.297 19:06:42 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:32.297 19:06:42 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.297 19:06:42 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:36:32.297 19:06:42 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:32.297 19:06:42 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:32.297 19:06:42 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:32.297 19:06:42 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:32.297 19:06:42 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:32.297 19:06:42 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:32.297 19:06:42 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:32.297 19:06:42 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:32.297 19:06:42 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:32.297 19:06:42 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:32.297 19:06:42 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:32.297 19:06:42 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:32.297 19:06:42 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:32.297 19:06:42 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:32.297 19:06:42 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:32.297 19:06:42 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:32.297 19:06:42 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:32.297 19:06:42 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:32.297 19:06:42 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:32.297 19:06:42 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:32.297 19:06:42 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:32.297 19:06:42 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:32.297 19:06:42 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:32.297 19:06:42 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:32.297 19:06:42 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:32.297 19:06:42 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:32.297 19:06:42 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:32.297 19:06:42 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:32.297 19:06:42 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:32.297 /tmp/:spdk-test:key0 00:36:32.297 19:06:42 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:32.297 19:06:42 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:32.297 19:06:42 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:32.297 19:06:42 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:32.297 19:06:42 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:32.297 19:06:42 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:32.297 19:06:42 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:32.297 19:06:42 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:32.297 19:06:42 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:32.297 19:06:42 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:32.297 19:06:42 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:32.297 19:06:42 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:32.297 19:06:42 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:32.297 19:06:42 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:32.297 19:06:42 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:32.297 /tmp/:spdk-test:key1 00:36:32.297 19:06:42 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1571609 00:36:32.297 19:06:42 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:32.297 19:06:42 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1571609 00:36:32.297 19:06:42 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 1571609 ']' 00:36:32.297 19:06:42 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:32.297 19:06:42 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:32.297 19:06:42 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:32.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:32.297 19:06:42 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:32.297 19:06:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:32.297 [2024-07-20 19:06:42.609131] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
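Written out by hand, the prep_key/chmod sequence traced above amounts to the lines below. The interchange string is the literal value this run later loads for key 00112233445566778899aabbccddeeff with digest 0; the helper derives it with the small "python -" snippet, so byte-for-byte details such as a trailing newline are taken on trust here.
  # File body is the NVMe TLS interchange form of the key; group/other-accessible
  # modes are refused by keyring_file_add_key (see the 0660 failure earlier in this
  # log), so the tests settle on 0600.
  key_path=/tmp/:spdk-test:key0
  printf '%s' 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
  chmod 0600 "$key_path"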
00:36:32.297 [2024-07-20 19:06:42.609241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1571609 ] 00:36:32.556 EAL: No free 2048 kB hugepages reported on node 1 00:36:32.556 [2024-07-20 19:06:42.668893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:32.556 [2024-07-20 19:06:42.757559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:32.817 19:06:43 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:32.817 19:06:43 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:36:32.817 19:06:43 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:32.817 19:06:43 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.817 19:06:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:32.817 [2024-07-20 19:06:43.015575] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:32.817 null0 00:36:32.817 [2024-07-20 19:06:43.047634] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:32.817 [2024-07-20 19:06:43.048188] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:32.817 19:06:43 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.817 19:06:43 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:32.817 100156637 00:36:32.817 19:06:43 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:32.817 189214613 00:36:32.817 19:06:43 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1571618 00:36:32.817 19:06:43 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:32.817 19:06:43 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1571618 /var/tmp/bperf.sock 00:36:32.817 19:06:43 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 1571618 ']' 00:36:32.817 19:06:43 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:32.817 19:06:43 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:32.817 19:06:43 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:32.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:32.817 19:06:43 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:32.817 19:06:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:32.817 [2024-07-20 19:06:43.113052] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
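The keyring_linux flavour stores the same interchange strings in the kernel session keyring instead of in files; the serial numbers echoed above (100156637 and 189214613) are what keyctl add returns. A hand-run sketch of that step, using the exact strings from this run:
  # Add both test keys as "user" keys on the session keyring (@s); keyctl add
  # prints the new serial, which the script keeps for later lookup and unlink.
  sn0=$(keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s)
  sn1=$(keyctl add user :spdk-test:key1 'NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:' @s)
  keyctl print "$sn0"   # prints the key0 interchange string, which the later [[ ... ]] check compares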
00:36:32.818 [2024-07-20 19:06:43.113131] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1571618 ] 00:36:33.077 EAL: No free 2048 kB hugepages reported on node 1 00:36:33.077 [2024-07-20 19:06:43.175213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:33.077 [2024-07-20 19:06:43.265529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:33.077 19:06:43 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:33.077 19:06:43 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:36:33.077 19:06:43 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:33.077 19:06:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:33.335 19:06:43 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:33.335 19:06:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:33.593 19:06:43 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:33.593 19:06:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:33.850 [2024-07-20 19:06:44.137756] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:34.108 nvme0n1 00:36:34.108 19:06:44 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:34.108 19:06:44 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:34.108 19:06:44 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:34.108 19:06:44 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:34.108 19:06:44 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:34.108 19:06:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:34.365 19:06:44 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:34.365 19:06:44 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:34.365 19:06:44 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:34.365 19:06:44 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:34.365 19:06:44 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:34.365 19:06:44 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:34.365 19:06:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:34.623 19:06:44 keyring_linux -- keyring/linux.sh@25 -- # sn=100156637 00:36:34.623 19:06:44 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:34.623 19:06:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
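The attach sequence traced above, collected in one place: bdevperf was started with --wait-for-rpc, so in this run the Linux keyring backend is switched on and framework init is triggered over RPC before the controller is attached, and --psk names the kernel keyring entry rather than a file path. The commands are the ones in the trace; only the $rpc shorthand is added here.
  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc keyring_linux_set_options --enable     # enable the kernel-keyring backend
  $rpc framework_start_init                   # finish startup deferred by --wait-for-rpc
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
  $rpc keyring_get_keys | jq -r '.[].name'    # expect :spdk-test:key0 to be listed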
00:36:34.623 19:06:44 keyring_linux -- keyring/linux.sh@26 -- # [[ 100156637 == \1\0\0\1\5\6\6\3\7 ]] 00:36:34.623 19:06:44 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 100156637 00:36:34.623 19:06:44 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:34.623 19:06:44 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:34.623 Running I/O for 1 seconds... 00:36:35.995 00:36:35.995 Latency(us) 00:36:35.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:35.995 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:35.995 nvme0n1 : 1.04 2187.74 8.55 0.00 0.00 57341.70 15146.10 75342.13 00:36:35.995 =================================================================================================================== 00:36:35.995 Total : 2187.74 8.55 0.00 0.00 57341.70 15146.10 75342.13 00:36:35.995 0 00:36:35.995 19:06:45 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:35.995 19:06:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:35.995 19:06:46 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:35.995 19:06:46 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:35.995 19:06:46 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:35.995 19:06:46 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:35.995 19:06:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:35.995 19:06:46 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:36.252 19:06:46 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:36.252 19:06:46 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:36.252 19:06:46 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:36.252 19:06:46 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:36.252 19:06:46 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:36:36.252 19:06:46 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:36.252 19:06:46 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:36.252 19:06:46 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:36.252 19:06:46 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:36.252 19:06:46 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:36.252 19:06:46 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:36.252 19:06:46 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:36.510 [2024-07-20 19:06:46.620472] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:36.510 [2024-07-20 19:06:46.620930] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fdea0 (107): Transport endpoint is not connected 00:36:36.510 [2024-07-20 19:06:46.621920] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fdea0 (9): Bad file descriptor 00:36:36.510 [2024-07-20 19:06:46.622919] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:36.510 [2024-07-20 19:06:46.622940] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:36.510 [2024-07-20 19:06:46.622954] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:36.510 request: 00:36:36.510 { 00:36:36.510 "name": "nvme0", 00:36:36.510 "trtype": "tcp", 00:36:36.510 "traddr": "127.0.0.1", 00:36:36.510 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:36.510 "adrfam": "ipv4", 00:36:36.510 "trsvcid": "4420", 00:36:36.510 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:36.510 "psk": ":spdk-test:key1", 00:36:36.510 "method": "bdev_nvme_attach_controller", 00:36:36.510 "req_id": 1 00:36:36.510 } 00:36:36.510 Got JSON-RPC error response 00:36:36.510 response: 00:36:36.510 { 00:36:36.510 "code": -5, 00:36:36.510 "message": "Input/output error" 00:36:36.510 } 00:36:36.510 19:06:46 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:36:36.510 19:06:46 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:36.511 19:06:46 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:36.511 19:06:46 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:36.511 19:06:46 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:36.511 19:06:46 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:36.511 19:06:46 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:36.511 19:06:46 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:36.511 19:06:46 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:36.511 19:06:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:36.511 19:06:46 keyring_linux -- keyring/linux.sh@33 -- # sn=100156637 00:36:36.511 19:06:46 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 100156637 00:36:36.511 1 links removed 00:36:36.511 19:06:46 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:36.511 19:06:46 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:36.511 19:06:46 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:36.511 19:06:46 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:36.511 19:06:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:36.511 19:06:46 keyring_linux -- keyring/linux.sh@33 -- # sn=189214613 00:36:36.511 19:06:46 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 189214613 00:36:36.511 1 links removed 00:36:36.511 19:06:46 keyring_linux -- keyring/linux.sh@41 
-- # killprocess 1571618 00:36:36.511 19:06:46 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 1571618 ']' 00:36:36.511 19:06:46 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 1571618 00:36:36.511 19:06:46 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:36:36.511 19:06:46 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:36.511 19:06:46 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1571618 00:36:36.511 19:06:46 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:36.511 19:06:46 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:36.511 19:06:46 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1571618' 00:36:36.511 killing process with pid 1571618 00:36:36.511 19:06:46 keyring_linux -- common/autotest_common.sh@965 -- # kill 1571618 00:36:36.511 Received shutdown signal, test time was about 1.000000 seconds 00:36:36.511 00:36:36.511 Latency(us) 00:36:36.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:36.511 =================================================================================================================== 00:36:36.511 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:36.511 19:06:46 keyring_linux -- common/autotest_common.sh@970 -- # wait 1571618 00:36:36.768 19:06:46 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1571609 00:36:36.768 19:06:46 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 1571609 ']' 00:36:36.768 19:06:46 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 1571609 00:36:36.768 19:06:46 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:36:36.768 19:06:46 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:36.768 19:06:46 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1571609 00:36:36.768 19:06:46 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:36.768 19:06:46 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:36.768 19:06:46 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1571609' 00:36:36.768 killing process with pid 1571609 00:36:36.768 19:06:46 keyring_linux -- common/autotest_common.sh@965 -- # kill 1571609 00:36:36.768 19:06:46 keyring_linux -- common/autotest_common.sh@970 -- # wait 1571609 00:36:37.026 00:36:37.026 real 0m4.919s 00:36:37.026 user 0m9.102s 00:36:37.026 sys 0m1.390s 00:36:37.026 19:06:47 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:37.026 19:06:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:37.026 ************************************ 00:36:37.026 END TEST keyring_linux 00:36:37.026 ************************************ 00:36:37.026 19:06:47 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:36:37.026 19:06:47 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:36:37.026 19:06:47 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:36:37.026 19:06:47 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:36:37.026 19:06:47 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:36:37.026 19:06:47 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:36:37.026 19:06:47 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:36:37.026 19:06:47 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:36:37.026 19:06:47 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:36:37.026 19:06:47 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 
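For completeness, the cleanup that the keyring_linux run above just performed can be expressed as the short sequence below: each test key is looked up by name to recover its serial, unlinked from the session keyring (the "1 links removed" lines), and the two daemons are stopped. The PIDs are the ones from this run; the harness's killprocess helper additionally waits for the processes to exit.
  for name in :spdk-test:key0 :spdk-test:key1; do
      sn=$(keyctl search @s user "$name") && keyctl unlink "$sn"
  done
  kill 1571618 1571609   # bdevperf (1571618) and spdk_tgt (1571609) from this run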
00:36:37.026 19:06:47 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:36:37.026 19:06:47 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:36:37.026 19:06:47 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:36:37.026 19:06:47 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:36:37.026 19:06:47 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:36:37.026 19:06:47 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:36:37.026 19:06:47 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:36:37.026 19:06:47 -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:37.026 19:06:47 -- common/autotest_common.sh@10 -- # set +x 00:36:37.287 19:06:47 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:36:37.287 19:06:47 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:36:37.287 19:06:47 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:36:37.287 19:06:47 -- common/autotest_common.sh@10 -- # set +x 00:36:39.193 INFO: APP EXITING 00:36:39.193 INFO: killing all VMs 00:36:39.193 INFO: killing vhost app 00:36:39.193 INFO: EXIT DONE 00:36:40.141 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:36:40.141 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:36:40.141 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:36:40.141 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:36:40.141 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:36:40.141 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:36:40.141 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:36:40.141 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:36:40.141 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:36:40.141 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:36:40.141 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:36:40.141 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:36:40.141 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:36:40.141 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:36:40.141 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:36:40.141 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:36:40.141 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:36:41.514 Cleaning 00:36:41.514 Removing: /var/run/dpdk/spdk0/config 00:36:41.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:41.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:41.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:41.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:41.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:41.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:41.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:41.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:41.514 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:41.514 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:41.514 Removing: /var/run/dpdk/spdk1/config 00:36:41.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:41.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:41.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:41.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:41.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:41.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:41.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:41.514 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:41.514 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:41.514 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:41.514 Removing: /var/run/dpdk/spdk1/mp_socket 00:36:41.514 Removing: /var/run/dpdk/spdk2/config 00:36:41.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:41.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:41.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:41.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:41.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:41.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:41.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:41.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:41.514 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:41.514 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:41.514 Removing: /var/run/dpdk/spdk3/config 00:36:41.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:41.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:41.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:41.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:41.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:41.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:41.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:41.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:41.514 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:41.514 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:41.514 Removing: /var/run/dpdk/spdk4/config 00:36:41.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:41.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:41.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:41.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:41.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:41.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:41.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:41.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:41.514 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:41.514 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:41.514 Removing: /dev/shm/bdev_svc_trace.1 00:36:41.514 Removing: /dev/shm/nvmf_trace.0 00:36:41.514 Removing: /dev/shm/spdk_tgt_trace.pid1253611 00:36:41.514 Removing: /var/run/dpdk/spdk0 00:36:41.514 Removing: /var/run/dpdk/spdk1 00:36:41.514 Removing: /var/run/dpdk/spdk2 00:36:41.514 Removing: /var/run/dpdk/spdk3 00:36:41.514 Removing: /var/run/dpdk/spdk4 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1252065 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1252793 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1253611 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1254044 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1254737 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1254877 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1255604 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1255616 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1255860 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1257066 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1258088 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1258282 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1258483 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1258786 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1258975 00:36:41.514 Removing: 
/var/run/dpdk/spdk_pid1259135 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1259298 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1259528 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1260171 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1263023 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1263192 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1263354 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1263357 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1263789 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1263792 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1264222 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1264232 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1264517 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1264532 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1264694 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1264739 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1265193 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1265347 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1265546 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1265714 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1265735 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1265927 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1266078 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1266262 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1266512 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1266666 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1266823 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1267059 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1267258 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1267410 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1267569 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1267835 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1267998 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1268155 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1268320 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1268582 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1268743 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1268900 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1269171 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1269334 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1269494 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1269652 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1269838 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1270044 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1272101 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1325071 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1327561 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1334515 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1337699 00:36:41.514 Removing: /var/run/dpdk/spdk_pid1340355 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1340816 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1347916 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1347918 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1348460 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1349107 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1349771 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1350261 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1350278 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1350430 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1350564 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1350566 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1351729 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1352376 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1352932 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1353328 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1353443 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1353588 00:36:41.772 Removing: 
/var/run/dpdk/spdk_pid1354471 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1355202 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1360547 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1360818 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1363328 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1367020 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1369189 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1375433 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1380627 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1381821 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1382601 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1393166 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1395377 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1420302 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1423127 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1424258 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1425577 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1425712 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1425816 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1425870 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1426300 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1427593 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1428220 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1428642 00:36:41.772 Removing: /var/run/dpdk/spdk_pid1430253 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1430560 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1431118 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1433515 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1436885 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1441027 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1463016 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1465637 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1470038 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1470988 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1472078 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1474622 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1476853 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1481056 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1481059 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1483822 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1483965 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1484211 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1484483 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1484488 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1485563 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1486866 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1488047 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1489221 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1490399 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1491585 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1495379 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1495709 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1496991 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1497773 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1502054 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1503923 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1507329 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1510778 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1516985 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1521325 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1521327 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1533161 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1533630 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1534285 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1534973 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1535524 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1535954 00:36:41.773 Removing: 
/var/run/dpdk/spdk_pid1536463 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1536868 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1539359 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1539503 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1543287 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1543338 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1545059 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1549964 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1549976 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1552864 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1554262 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1555662 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1556401 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1557798 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1558691 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1563868 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1564344 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1564727 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1566670 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1567064 00:36:41.773 Removing: /var/run/dpdk/spdk_pid1567461 00:36:42.031 Removing: /var/run/dpdk/spdk_pid1569663 00:36:42.031 Removing: /var/run/dpdk/spdk_pid1569790 00:36:42.031 Removing: /var/run/dpdk/spdk_pid1571130 00:36:42.031 Removing: /var/run/dpdk/spdk_pid1571609 00:36:42.031 Removing: /var/run/dpdk/spdk_pid1571618 00:36:42.031 Clean 00:36:42.031 19:06:52 -- common/autotest_common.sh@1447 -- # return 0 00:36:42.031 19:06:52 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:36:42.031 19:06:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:42.031 19:06:52 -- common/autotest_common.sh@10 -- # set +x 00:36:42.031 19:06:52 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:36:42.031 19:06:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:42.031 19:06:52 -- common/autotest_common.sh@10 -- # set +x 00:36:42.031 19:06:52 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:42.031 19:06:52 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:42.031 19:06:52 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:42.031 19:06:52 -- spdk/autotest.sh@391 -- # hash lcov 00:36:42.031 19:06:52 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:42.031 19:06:52 -- spdk/autotest.sh@393 -- # hostname 00:36:42.031 19:06:52 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:42.288 geninfo: WARNING: invalid characters removed from testname! 
00:37:14.347 19:07:19 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:14.347 19:07:23 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:16.871 19:07:26 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:20.151 19:07:29 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:22.710 19:07:32 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:25.983 19:07:35 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:28.507 19:07:38 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:28.507 19:07:38 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:28.507 19:07:38 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:28.507 19:07:38 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:28.507 19:07:38 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:28.507 19:07:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.507 19:07:38 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.507 19:07:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.507 19:07:38 -- paths/export.sh@5 -- $ export PATH 00:37:28.507 19:07:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.507 19:07:38 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:37:28.507 19:07:38 -- common/autobuild_common.sh@437 -- $ date +%s 00:37:28.507 19:07:38 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721495258.XXXXXX 00:37:28.507 19:07:38 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721495258.dtFNWN 00:37:28.507 19:07:38 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:37:28.507 19:07:38 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:37:28.507 19:07:38 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:37:28.507 19:07:38 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:37:28.507 19:07:38 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:37:28.507 19:07:38 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:37:28.507 19:07:38 -- common/autobuild_common.sh@453 -- $ get_config_params 00:37:28.507 19:07:38 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:37:28.507 19:07:38 -- common/autotest_common.sh@10 -- $ set +x 00:37:28.507 19:07:38 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:37:28.507 19:07:38 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:37:28.507 19:07:38 -- pm/common@17 -- $ local monitor 00:37:28.507 19:07:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:28.507 19:07:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:28.507 19:07:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:28.507 
19:07:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:28.507 19:07:38 -- pm/common@21 -- $ date +%s 00:37:28.507 19:07:38 -- pm/common@21 -- $ date +%s 00:37:28.507 19:07:38 -- pm/common@25 -- $ sleep 1 00:37:28.507 19:07:38 -- pm/common@21 -- $ date +%s 00:37:28.507 19:07:38 -- pm/common@21 -- $ date +%s 00:37:28.507 19:07:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721495258 00:37:28.507 19:07:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721495258 00:37:28.507 19:07:38 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721495258 00:37:28.507 19:07:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721495258 00:37:28.507 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721495258_collect-vmstat.pm.log 00:37:28.507 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721495258_collect-cpu-load.pm.log 00:37:28.507 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721495258_collect-cpu-temp.pm.log 00:37:28.507 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721495258_collect-bmc-pm.bmc.pm.log 00:37:29.880 19:07:39 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:37:29.880 19:07:39 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:37:29.880 19:07:39 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:29.880 19:07:39 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:37:29.880 19:07:39 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:37:29.880 19:07:39 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:37:29.880 19:07:39 -- spdk/autopackage.sh@19 -- $ timing_finish 00:37:29.880 19:07:39 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:29.880 19:07:39 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:29.880 19:07:39 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:29.880 19:07:39 -- spdk/autopackage.sh@20 -- $ exit 0 00:37:29.880 19:07:39 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:37:29.880 19:07:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:37:29.880 19:07:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:37:29.880 19:07:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:29.880 19:07:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:37:29.880 19:07:39 -- pm/common@44 -- $ pid=1582819 00:37:29.880 19:07:39 -- pm/common@50 -- $ kill -TERM 1582819 00:37:29.880 19:07:39 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:37:29.880 19:07:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:37:29.880 19:07:39 -- pm/common@44 -- $ pid=1582821 00:37:29.880 19:07:39 -- pm/common@50 -- $ kill -TERM 1582821 00:37:29.880 19:07:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:29.880 19:07:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:37:29.880 19:07:39 -- pm/common@44 -- $ pid=1582823 00:37:29.880 19:07:39 -- pm/common@50 -- $ kill -TERM 1582823 00:37:29.880 19:07:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:29.880 19:07:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:37:29.880 19:07:39 -- pm/common@44 -- $ pid=1582850 00:37:29.880 19:07:39 -- pm/common@50 -- $ sudo -E kill -TERM 1582850 00:37:29.880 + [[ -n 1147968 ]] 00:37:29.880 + sudo kill 1147968 00:37:29.889 [Pipeline] } 00:37:29.905 [Pipeline] // stage 00:37:29.908 [Pipeline] } 00:37:29.924 [Pipeline] // timeout 00:37:29.928 [Pipeline] } 00:37:29.943 [Pipeline] // catchError 00:37:29.947 [Pipeline] } 00:37:29.962 [Pipeline] // wrap 00:37:29.965 [Pipeline] } 00:37:29.977 [Pipeline] // catchError 00:37:29.983 [Pipeline] stage 00:37:29.985 [Pipeline] { (Epilogue) 00:37:29.996 [Pipeline] catchError 00:37:29.998 [Pipeline] { 00:37:30.010 [Pipeline] echo 00:37:30.011 Cleanup processes 00:37:30.017 [Pipeline] sh 00:37:30.296 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:30.296 1582951 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:37:30.296 1583086 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:30.307 [Pipeline] sh 00:37:30.583 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:30.583 ++ grep -v 'sudo pgrep' 00:37:30.583 ++ awk '{print $1}' 00:37:30.583 + sudo kill -9 1582951 00:37:30.594 [Pipeline] sh 00:37:30.874 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:40.863 [Pipeline] sh 00:37:41.141 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:41.142 Artifacts sizes are good 00:37:41.155 [Pipeline] archiveArtifacts 00:37:41.161 Archiving artifacts 00:37:41.347 [Pipeline] sh 00:37:41.624 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:41.637 [Pipeline] cleanWs 00:37:41.645 [WS-CLEANUP] Deleting project workspace... 00:37:41.645 [WS-CLEANUP] Deferred wipeout is used... 00:37:41.650 [WS-CLEANUP] done 00:37:41.651 [Pipeline] } 00:37:41.668 [Pipeline] // catchError 00:37:41.678 [Pipeline] sh 00:37:41.967 + logger -p user.info -t JENKINS-CI 00:37:41.975 [Pipeline] } 00:37:41.990 [Pipeline] // stage 00:37:41.995 [Pipeline] } 00:37:42.012 [Pipeline] // node 00:37:42.017 [Pipeline] End of Pipeline 00:37:42.053 Finished: SUCCESS